Essay

Evaluating Pre-training Strategies for Generalizability

A development team is deciding between two pre-training strategies for a new foundation language model.

  • Strategy X: Train the model on a massive, highly diverse dataset from the web with a general self-supervised objective, such as predicting masked words (see the sketch after this list).
  • Strategy Y: Train the model on a curated, high-quality but narrow dataset of scientific research papers with a more task-specific objective, such as text summarization.
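
To make "predicting masked words" concrete, here is a minimal sketch of a masked language modeling objective in PyTorch. The toy vocabulary, the `TinyEncoder` stand-in, the special token ids, and the 15% masking rate are all illustrative assumptions, not details from the prompt.

```python
# Minimal sketch of a masked-word (masked language modeling) objective.
# TinyEncoder, the token ids, and the masking rate are illustrative
# assumptions; a real model would be a full transformer encoder.
import torch
import torch.nn as nn

VOCAB_SIZE, PAD_ID, MASK_ID = 1000, 0, 1

class TinyEncoder(nn.Module):
    """Stand-in for a transformer encoder being pre-trained."""
    def __init__(self, vocab_size: int, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.proj = nn.Linear(dim, vocab_size)  # per-position token logits

    def forward(self, ids: torch.Tensor) -> torch.Tensor:
        return self.proj(self.embed(ids))  # (batch, seq, vocab)

def mlm_loss(model: nn.Module, token_ids: torch.Tensor,
             mask_prob: float = 0.15) -> torch.Tensor:
    """Mask a random subset of tokens and score predictions only there."""
    mask = (torch.rand_like(token_ids, dtype=torch.float) < mask_prob) \
           & (token_ids != PAD_ID)
    corrupted = token_ids.masked_fill(mask, MASK_ID)
    logits = model(corrupted)
    labels = token_ids.masked_fill(~mask, -100)  # -100 = ignored by cross_entropy
    return nn.functional.cross_entropy(
        logits.view(-1, VOCAB_SIZE), labels.view(-1), ignore_index=-100
    )

model = TinyEncoder(VOCAB_SIZE)
batch = torch.randint(2, VOCAB_SIZE, (8, 32))  # 8 sequences of 32 token ids
loss = mlm_loss(model, batch)
loss.backward()  # gradients flow only from the masked positions
```

The property relevant to the essay is visible in the code: the objective needs nothing but raw token ids, so any text can supply training signal, with no task-specific labels required.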

The ultimate goal is to create a model whose parameters can be effectively adapted for a wide variety of future applications, from chatbot conversations to code generation. Evaluate the two strategies, arguing which is more likely to achieve the stated goal and explaining the underlying principles that guide your decision.
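
For concreteness, below is a minimal sketch of what adapting pre-trained parameters typically looks like in practice: the pre-trained backbone is reused as initialization and a small task-specific head is trained on top, often with a smaller learning rate for the backbone. The `TinyEncoder` backbone, the two-class head, the `pretrained.pt` file name, and the learning rates are illustrative assumptions.

```python
# Minimal sketch of adapting pre-trained parameters to a downstream task
# (fine-tuning). The backbone, head, file name, and learning rates are
# illustrative assumptions, not details from the prompt.
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    """Stand-in for a pre-trained backbone."""
    def __init__(self, vocab_size: int = 1000, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)

    def forward(self, ids: torch.Tensor) -> torch.Tensor:
        return self.embed(ids).mean(dim=1)  # (batch, dim) pooled features

backbone = TinyEncoder()
# In practice the parameters would be loaded from a pre-training run, e.g.:
# backbone.load_state_dict(torch.load("pretrained.pt"))  # hypothetical checkpoint

head = nn.Linear(64, 2)  # fresh head for a hypothetical two-class downstream task
optimizer = torch.optim.AdamW([
    {"params": backbone.parameters(), "lr": 1e-5},  # nudge pre-trained weights gently
    {"params": head.parameters(), "lr": 1e-3},      # train the new head faster
])

ids = torch.randint(0, 1000, (8, 32))  # a downstream mini-batch of token ids
labels = torch.randint(0, 2, (8,))     # downstream labels
loss = nn.functional.cross_entropy(head(backbone(ids)), labels)
loss.backward()
optimizer.step()
```

The same backbone could just as well feed a generation head for chat or code; the question is essentially asking which pre-training strategy leaves those backbone parameters most useful across such swaps.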
