Learn Before
Convergence Phase of LLM Scaling (Irreducible Error)
After a period of rapid improvement, the rate of error reduction slows as the model enters the convergence phase. The performance curve flattens and approaches a lower bound known as the 'irreducible error.' This floor on performance may stem from factors such as inherent noise in the training data, fundamental ambiguity in the language task itself, or limitations of the model's architecture.

Tags
Ch.2 Generative Models - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
Power-law Reduction Phase in LLM Scaling
Slow Reduction Phase in LLM Scaling
A research team is training a language model and plots its test error against the training dataset size on a log-log scale. The resulting curve shows three distinct regions in sequence: an initial region with a slow, shallow decline in error; a second region with a steep, rapid decline; and a final region where the curve flattens and error reduction becomes minimal. Which of the following is the most accurate interpretation of the final region where the curve flattens?
A researcher is training a large language model and plots its test error against the training dataset size on a log-log scale. The resulting curve shows three distinct stages of performance improvement. Arrange these stages in the order they typically occur as the dataset size increases from small to very large.
Strategic Resource Allocation for LLM Training
Learn After
A research team is developing a language model. They progressively increase the model's size and the amount of training data, observing that performance gains diminish significantly with each increase. The largest model shows almost no improvement over the second-largest, despite being much bigger. What is the most likely reason for this plateau in performance?
Strategic Decision for a Stagnant LLM
Analyzing the Performance Plateau in Model Scaling