Learn Before
A research team observes that a large language model, pre-trained on a massive text corpus, requires a surprisingly small dataset of instruction-following examples to become a helpful assistant. According to the Superficial Alignment Hypothesis, what is the most accurate explanation for this observation?
Tags
Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Interpreting LLM Training Observations
Explaining the Role of Fine-Tuning