Data Paradigms: Pre-training vs. Supervised Fine-Tuning
Contrast the data acquisition and preparation processes for a large language model's initial, general-purpose training phase with the subsequent phase where it is adapted for a specific, supervised task. In your analysis, explain why the data requirements for the second phase often present a more significant bottleneck than for the first.
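When answering, it can help to picture the concrete shape of the data in each phase. A minimal sketch (all names and examples here are hypothetical, not from any specific pipeline) contrasting raw pre-training text with labeled supervised fine-tuning pairs:

```python
# Pre-training data: vast, unlabeled text gathered at scale.
# No human labels are required -- the next-token objective supplies the signal.
pretraining_corpus = [
    "The mitochondria is the powerhouse of the cell.",
    "def add(a, b): return a + b",
    "Breaking news: markets rallied sharply today...",
]

# Supervised fine-tuning data: curated (input, target) pairs.
# Each example demands deliberate human effort -- for legal summarization,
# an expert must write or verify every target summary.
sft_examples = [
    {
        "prompt": "Summarize the following contract clause: ...",
        "response": "The clause limits liability to direct damages only.",
    },
]

# The bottleneck in one line: pre-training text is abundant and cheap per
# token, while each SFT pair requires scarce, costly expert annotation.
relative_cost = {
    "pretraining": "near zero per document (scraped)",
    "sft": "high per example (expert-written)",
}
```

The asymmetry the question targets is visible in the structure itself: the first list is just strings, collectable by the billions; the second requires a matched, human-authored `response` for every `prompt`.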
Tags
Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
A research lab has a powerful, general-purpose language model that was trained on a vast, unlabeled corpus of internet text. They now want to adapt this model to perform a specialized task: accurately summarizing legal documents. Based on the typical data requirements for this adaptation process, what is the most significant and immediate challenge the lab will face?
Fine-Tuning for a Niche Domain