Learn Before
Goal of Parallel Processing: Linear Scalability
The primary objective of parallel processing in distributed training is linear scalability: the system's throughput, measured as the number of samples processed per unit of time, should increase in direct proportion to the number of processing devices. Ideally, doubling the number of devices doubles the throughput.
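The relationship above can be sketched in a few lines of Python. This is an illustrative example only; the function names and sample numbers are assumptions, not part of the original text.

```python
def ideal_throughput(single_device_throughput: float, num_devices: int) -> float:
    """Ideal (linearly scalable) throughput in samples/second:
    throughput grows in direct proportion to the device count."""
    return single_device_throughput * num_devices


def scaling_efficiency(measured_throughput: float,
                       single_device_throughput: float,
                       num_devices: int) -> float:
    """Ratio of measured to ideal throughput.
    1.0 means perfect linear scaling; real systems fall below this
    because of communication and synchronization overhead."""
    return measured_throughput / ideal_throughput(
        single_device_throughput, num_devices)


# Hypothetical numbers: one device processes 500 samples/s.
# Linear scalability predicts 1,000 devices reach 500,000 samples/s.
print(ideal_throughput(500.0, 1000))              # 500000.0
# If the cluster actually measures 400,000 samples/s,
# scaling efficiency is 0.8 (80% of ideal).
print(scaling_efficiency(400_000.0, 500.0, 1000))  # 0.8
```

In practice, measured throughput is compared against this ideal line to judge how well a distributed setup scales as devices are added.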
Tags
Ch.2 Generative Models - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
Types of Parallelism in LLM Training
Goal of Parallel Processing: Linear Scalability
Complexity of Distributed Training
A research lab is training a language model so large that it would take several years to complete on a single computer. To speed up the process, they decide to use a cluster of 1,000 interconnected computers. Which of the following statements best analyzes the fundamental principle that allows this cluster to significantly reduce the training time?
Evaluating a Training Strategy
Explaining Training Efficiency
Learn After
A team is evaluating the performance of their distributed training setup. They measure the number of data samples processed per second as they increase the number of processing devices. Which of the following outcomes best illustrates the ideal goal of linear scalability?
Evaluating Distributed Training Scalability
Analyzing System Scalability