Learn Before
A technology company claims it can create a perfectly helpful and harmless AI assistant simply by pre-training a model on an exhaustive dataset containing all books, articles, and websites ever published. The company argues that such a comprehensive dataset would make any subsequent training phase to align the model's behavior unnecessary. Which of the following statements provides the most critical evaluation of this claim's primary flaw?
Tags
Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Evaluation in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Two-Step Post-Pre-training Alignment Process
Rationale for Post-Training Alignment
Critique of a Pre-training-Only Approach