Short Answer

Prioritizing Pre-training Efforts

A machine learning team is developing a new language model and is debating how to allocate their resources. They can either spend the next six months designing a novel, more sophisticated pre-training objective, or they can use their existing, simpler objective and focus all their efforts on acquiring a much larger dataset and securing more computational power for training. Based on the general principle demonstrated by the evolution of large-scale pre-trained models, which strategy is more likely to yield a more performant model? Justify your reasoning.
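The principle the question points at is the empirical scaling-law result: with a fixed, simple objective (e.g., next-token prediction), pre-training loss falls smoothly and predictably as parameters and data grow. The sketch below illustrates that reasoning using the parametric loss fit from the Chinchilla paper (Hoffmann et al., 2022), L(N, D) = E + A/N^alpha + B/D^beta. The fitted constants are taken from that paper; the two training scenarios are illustrative assumptions, not real measurements.

    # A minimal sketch of the scaling-law reasoning behind this question,
    # using the Chinchilla parametric loss fit (Hoffmann et al., 2022):
    #     L(N, D) = E + A / N**alpha + B / D**beta
    # Constants are the paper's fitted values; the scenario sizes below
    # are illustrative assumptions, not measurements.

    E, A, B = 1.69, 406.4, 410.7   # irreducible loss and scale coefficients
    alpha, beta = 0.34, 0.28       # fitted exponents for params N, tokens D

    def predicted_loss(n_params: float, n_tokens: float) -> float:
        """Predicted pre-training loss for N parameters trained on D tokens."""
        return E + A / n_params**alpha + B / n_tokens**beta

    # Baseline: 1B parameters on 20B tokens (roughly the compute-optimal ratio).
    baseline = predicted_loss(1e9, 20e9)
    # Scale-up: 10x parameters and 10x tokens, same simple objective.
    scaled = predicted_loss(1e10, 200e9)

    print(f"baseline loss:   {baseline:.2f}")  # ~2.58
    print(f"10x-scaled loss: {scaled:.2f}")    # ~2.13

Under this model, the roughly 0.45 reduction in predicted loss comes from scale alone and can be forecast in advance, whereas gains from a novel objective are speculative. That asymmetry is the core of the expected answer: betting on data and compute with a simple objective has historically been the more reliable path to a stronger model.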

