Evaluating a Distillation Training Strategy
A team is training a small 'student' model to perform a task by learning from a very large 'teacher' model. Their initial approach, which requires calculating a loss function over the teacher's entire probability distribution across all 200,000 possible outputs, is computationally infeasible.
A team member proposes a new strategy:
- For each input in the training data, generate a single, specific output using the teacher model.
- Train the student model using this single, teacher-generated output as the correct target.
Analyze this proposed strategy. Explain precisely how it addresses the computational problem, and identify the primary type of information from the teacher's knowledge that the student model loses access to under this method.
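For concreteness, here is a minimal sketch of the two objectives the question contrasts, assuming a PyTorch setup; all tensor names, shapes, and the greedy-decode stand-in are illustrative assumptions, not part of the original scenario:

```python
# Sketch contrasting full-distribution distillation with the proposed
# single-output strategy. Assumes PyTorch; names and shapes are illustrative.
import torch
import torch.nn.functional as F

vocab_size = 200_000   # the full output vocabulary from the scenario
seq_len = 8            # positions in one illustrative training example

# Stand-ins for per-position model outputs of shape [seq_len, vocab_size].
teacher_logits = torch.randn(seq_len, vocab_size)
student_logits = torch.randn(seq_len, vocab_size, requires_grad=True)

# (a) Initial approach: match the teacher's full distribution with a
# KL-divergence loss. Every position needs the teacher's probability
# for all 200,000 outputs, which is what makes this infeasible at scale.
full_kd_loss = F.kl_div(
    F.log_softmax(student_logits, dim=-1),
    F.softmax(teacher_logits, dim=-1),
    reduction="batchmean",
)

# (b) Proposed strategy: decode a single teacher output and use it as a
# hard label. The teacher's full distribution is never materialized;
# only one token id per position is kept as the training target.
teacher_output_ids = teacher_logits.argmax(dim=-1)  # greedy decode stand-in
hard_target_loss = F.cross_entropy(student_logits, teacher_output_ids)
```

In the literature, strategy (b) is often called hard-target or sequence-level distillation.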
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
A team is developing a small, efficient text-generation model (the 'student') by training it to imitate a much larger, powerful model (the 'teacher'). Their training method requires the student to learn from the full probability distribution the teacher assigns over all possible next words. They discover this is computationally infeasible because their vocabulary contains hundreds of thousands of words, and calculating the training objective for a single example requires summing over this entire vocabulary. Which of the following strategies provides the most practical solution to this specific computational problem while still using the teacher's guidance?
Optimizing a Distillation Training Process