Explaining the Role of Fine-Tuning
A colleague argues that fine-tuning is where a large language model learns most of its new skills and factual knowledge. Based on the idea that alignment is a "superficial" adjustment, briefly explain why this argument is likely incorrect.
Tags
Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Comprehension in Revised Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Interpreting LLM Training Observations
A research team observes that a large language model, pre-trained on a massive text corpus, requires a surprisingly small dataset of instruction-following examples to become a helpful assistant. According to the Superficial Alignment Hypothesis, what is the most accurate explanation for this observation?