Learn Before
Multiple Choice

A research team observes that a large language model, pre-trained on a massive text corpus, requires a surprisingly small dataset of instruction-following examples to become a helpful assistant. According to the Superficial Alignment Hypothesis, what is the most accurate explanation for this observation?


Updated 2025-10-03


Tags

Ch.4 Alignment - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Analysis in Bloom's Taxonomy

Cognitive Psychology

Psychology

Social Science

Empirical Science

Science