Learn Before
Slow Reduction Phase in LLM Scaling
In the initial phase of scaling a Large Language Model, when the training dataset is still small, the model's test error decreases only slowly. During this stage, increasing the amount of training data yields marginal performance improvements, and on a log-log plot of test error versus dataset size the curve shows a shallow, gradual decline.
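The three-phase shape can be illustrated with a toy curve. This is a minimal sketch: the functional form, the constants, and the name `toy_test_error` are all illustrative assumptions, not a fitted scaling law.

```python
import math

def toy_test_error(n_tokens: float) -> float:
    """Illustrative three-phase error curve (assumed form, not a fitted law):
    roughly flat while n_tokens is small (slow reduction), a steep
    power-law decline in the mid-range, and an irreducible-error floor
    once n_tokens is very large."""
    E_INF = 1.5            # irreducible error floor (convergence phase)
    A, B = 1000.0, 1000.0  # arbitrary constants chosen for illustration
    ALPHA = 0.5            # power-law exponent in the mid-range
    return E_INF + A / (B + n_tokens ** ALPHA)

# Log-log slope between successive decades of data: near zero in the
# slow-reduction phase, clearly negative in the power-law phase, and
# near zero again as the curve approaches the irreducible-error floor.
sizes = [10.0 ** k for k in range(2, 11)]
for a, b in zip(sizes, sizes[1:]):
    slope = (math.log10(toy_test_error(b)) - math.log10(toy_test_error(a))) / (
        math.log10(b) - math.log10(a)
    )
    print(f"{a:.0e} -> {b:.0e} tokens: log-log slope {slope:+.4f}")
```

Printing the slope per decade makes the phase boundaries visible: the magnitude is small at first, grows through the power-law region, then shrinks again near convergence.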

Tags
Ch.2 Generative Models - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
Power-law Reduction Phase in LLM Scaling
Convergence Phase of LLM Scaling (Irreducible Error)
Slow Reduction Phase in LLM Scaling
A research team is training a language model and plots its test error against the training dataset size on a log-log scale. The resulting curve shows three distinct regions in sequence: an initial region with a slow, shallow decline in error; a second region with a steep, rapid decline; and a final region where the curve flattens and error reduction becomes minimal. Which of the following is the most accurate interpretation of the final region where the curve flattens?
A researcher is training a large language model and plots its test error against the training dataset size on a log-log scale. The resulting curve shows three distinct stages of performance improvement. Arrange these stages in the order they typically occur as the dataset size increases from small to very large.
Strategic Resource Allocation for LLM Training
Learn After
A research team is training a language model. They meticulously track its performance on a fixed test set as they incrementally add more training data. They observe that doubling the dataset size from 5 billion to 10 billion tokens resulted in only a very small decrease in the model's test error. Based on this observation, which of the following is the soundest assessment of their plan to immediately acquire another 90 billion tokens of data?
A research team is training a new language model and records the following test error rates as they increase the size of the training dataset:
- 1 billion tokens: Error 3.50
- 2 billion tokens: Error 3.45
- 4 billion tokens: Error 3.42
- 8 billion tokens: Error 3.10
- 16 billion tokens: Error 2.50
Based on this data, at what point does the model most clearly transition out of the initial, slow improvement stage of training?
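The arithmetic behind this question can be worked through directly. A quick sketch using the error rates listed above (the dictionary layout is just one way to hold the data):

```python
# Error rates from the question above (tokens in billions -> test error)
errors = {1: 3.50, 2: 3.45, 4: 3.42, 8: 3.10, 16: 2.50}

# Compare the error drop per doubling of the dataset size.
sizes = sorted(errors)
for a, b in zip(sizes, sizes[1:]):
    drop = errors[a] - errors[b]
    print(f"{a}B -> {b}B tokens: error drop {drop:.2f}")
# Drops of 0.05 and 0.03 for the first two doublings, then 0.32 and 0.60:
# the jump between 4B and 8B tokens marks the exit from the slow,
# shallow-decline stage into the steep power-law stage.
```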
Evaluating a Training Strategy for a New LLM