Learn Before
A team starts with a large language model that is highly proficient at a wide range of general language tasks, including text summarization, translation, and question-answering. They then fine-tune this model exclusively on a new, highly specialized dataset of legal document summaries. After this training, the model becomes excellent at summarizing legal documents but is now significantly worse at performing general translation than it was before. Which phenomenon does this scenario most directly demonstrate?
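The degradation described here (catastrophic forgetting) can be reproduced in miniature. Below is a toy sketch, not anything from the card itself: a single linear model is first fit to a "general" task A, then fine-tuned exclusively on a "specialized" task B, after which its error on task A rises sharply. All names, targets, and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: same inputs, two different target functions (task A = "general",
# task B = "specialized"). Both weight vectors are arbitrary illustrative values.
X = rng.normal(size=(200, 5))
w_A = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
w_B = np.array([-3.0, 0.0, 2.0, -1.0, 4.0])
y_A = X @ w_A
y_B = X @ w_B

def mse(w, X, y):
    """Mean squared error of the linear model with weights w."""
    return float(np.mean((X @ w - y) ** 2))

def train(w, X, y, lr=0.05, steps=500):
    """Full-batch gradient descent on MSE, starting from w."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(X)
        w = w - lr * grad
    return w

w = np.zeros(5)
w = train(w, X, y_A)              # "pre-training" on the general task
loss_A_before = mse(w, X, y_A)    # near zero: task A is learned

w = train(w, X, y_B)              # fine-tuning exclusively on the new task
loss_A_after = mse(w, X, y_A)     # large: task A performance has collapsed
loss_B_after = mse(w, X, y_B)     # near zero: the new task is learned

print(f"task A loss before fine-tuning: {loss_A_before:.4f}")
print(f"task A loss after  fine-tuning: {loss_A_after:.4f}")
print(f"task B loss after  fine-tuning: {loss_B_after:.4f}")
```

Because the two tasks share every parameter and nothing constrains the weights to stay near their task-A solution, fine-tuning overwrites them entirely, which is the same mechanism behind forgetting in large models.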
Tags
Ch.2 Generative Models - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Mitigation Strategies for Catastrophic Forgetting
Diagnosing Performance Degradation in a Fine-Tuned Model
Illustrating a Key Fine-Tuning Challenge