Strategic Investment in Model Scaling
The lab has secured funding for one final, major expansion of the model. Based on the observed performance trend, should they allocate the entire budget to increasing the model's size again? Justify your recommendation, explaining the underlying principle that governs this situation.
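The trade-off the question asks about can be made concrete with a saturating power law of the form L(N) = E + A·N^(−α), where E is an irreducible error floor. The constants below are illustrative assumptions chosen for demonstration, not fitted values; the sketch shows why each successive doubling of parameter count buys a smaller absolute loss reduction.

```python
# Illustrative saturating power law: L(N) = E + A * N**(-alpha).
# E (irreducible error), A, and alpha are made-up constants for this sketch.
E, A, alpha = 0.05, 4.0, 0.34

def loss(n_params: float) -> float:
    """Predicted test loss for a model with n_params parameters."""
    return E + A * n_params ** -alpha

# Each doubling of model size yields a smaller absolute gain as L(N) -> E.
for n in [1e9, 2e9, 4e9, 8e9, 16e9]:
    gain = loss(n / 2) - loss(n)
    print(f"N={n:.0e}  loss={loss(n):.4f}  gain over half-size={gain:.4f}")
```

Under these assumed constants, the marginal improvement from doubling shrinks toward zero while the loss itself never drops below E, which is the principle of diminishing returns that the recommendation should rest on.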
Tags
Ch.2 Generative Models - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Evaluation in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Improved Power Law Formula for LLM Loss
A research team trains a series of language models with progressively more parameters on a fixed, large dataset, then plots each model's final test loss against its parameter count. They observe that as the models get larger, the loss decreases, but the rate of improvement slows, and the loss curve flattens out toward a small positive value rather than zero. Which of the following statements provides the most accurate interpretation of this phenomenon?
Analyzing Irreducible Error in LLM Scaling
A research team is using a scaling law model that includes an irreducible error term to predict the performance of their next-generation language model. The model predicts that even with a trillion parameters, the test loss will not drop below 0.05. This prediction implies that the inherent ambiguity and noise in the training and test data place a floor on the model's achievable performance on that data.