Multiple Choice

A research lab is developing a new language model. The lab's primary goal is to create a model that can reliably handle tasks and data types it was not explicitly trained on, such as analyzing niche scientific papers or summarizing newly emerging slang on social media. The lab is considering two main training strategies:

Strategy A: Curate a massive, diverse dataset from a wide range of sources (books, web pages, code, academic articles, social media) and use the majority of their computational budget for an extensive pre-training phase.

Strategy B: Use a smaller, more generic dataset for a quick pre-training phase, and then dedicate the majority of their computational budget to meticulously fine-tuning the model on hundreds of specific, narrow tasks.

Based on empirical findings about model generalization, which strategy is more likely to achieve the lab's primary goal and why?
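To make the contrast between the two strategies concrete, here is a minimal Python sketch that expresses each one as a training configuration. Everything in it is hypothetical and for illustration only: the TrainingPlan class, the 90%/20% pre-training fractions, and the fine-tuning task count are assumptions, not figures given in the question.

```python
from dataclasses import dataclass, field

@dataclass
class TrainingPlan:
    """Hypothetical description of how a strategy allocates data and compute."""
    name: str
    pretrain_fraction: float    # share of the total compute budget spent on pre-training
    pretrain_sources: list[str] = field(default_factory=list)
    finetune_tasks: int = 0     # number of narrow supervised fine-tuning tasks

# Strategy A: broad, diverse corpus; most compute goes to pre-training.
strategy_a = TrainingPlan(
    name="Strategy A (broad pre-training)",
    pretrain_fraction=0.9,      # assumed value for illustration
    pretrain_sources=["books", "web pages", "code", "academic articles", "social media"],
)

# Strategy B: quick generic pre-training; most compute goes to narrow fine-tuning.
strategy_b = TrainingPlan(
    name="Strategy B (heavy task-specific fine-tuning)",
    pretrain_fraction=0.2,      # assumed value for illustration
    pretrain_sources=["generic web text"],
    finetune_tasks=300,         # "hundreds of specific, narrow tasks"
)

for plan in (strategy_a, strategy_b):
    finetune_fraction = 1.0 - plan.pretrain_fraction
    print(f"{plan.name}: {plan.pretrain_fraction:.0%} pre-train on {plan.pretrain_sources}, "
          f"{finetune_fraction:.0%} fine-tune across {plan.finetune_tasks} task(s)")
```

The sketch only makes the budget allocations explicit; it performs no training and deliberately does not encode which strategy generalizes better, since that is what the question asks you to decide.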


Tags: Ch.4 Alignment - Foundations of Large Language Models, Foundations of Large Language Models, Foundations of Large Language Models Course, Computing Sciences, Evaluation in Bloom's Taxonomy, Cognitive Psychology, Psychology, Social Science, Empirical Science, Science