Short Answer

Evaluating a Distillation Training Strategy

A team is training a small 'student' model to perform a task by learning from a very large 'teacher' model. Their initial approach, which requires computing a loss over the teacher's full probability distribution across all 200,000 possible outputs for every training example, is computationally infeasible.
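To make that computation concrete, here is a minimal PyTorch sketch of the kind of full-distribution loss the initial approach implies, assuming a softened KL-divergence objective; the function name, temperature, and tensor shapes are illustrative choices, not details taken from the scenario.

```python
import torch
import torch.nn.functional as F

VOCAB_SIZE = 200_000  # every possible output the teacher can score

def full_distribution_distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Teacher's softened probabilities over all 200,000 outputs: shape [batch, VOCAB_SIZE].
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    # Student's log-probabilities over the same 200,000 outputs.
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # The KL divergence is summed over the full vocabulary for every training
    # example, which is the per-example cost the team found infeasible.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (temperature ** 2)

# Example shapes only: a small batch scored over the full 200,000-way output space.
student_logits = torch.randn(4, VOCAB_SIZE)
teacher_logits = torch.randn(4, VOCAB_SIZE)
loss = full_distribution_distillation_loss(student_logits, teacher_logits)
```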

A team member proposes a new strategy:

  1. For each input in the training data, generate a single, specific output using the teacher model.
  2. Train the student model using this single, teacher-generated output as the correct target (a minimal sketch of both steps follows this list).
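As a rough sketch of the proposed strategy under the same assumptions (PyTorch, hypothetical function names), the teacher's distribution is collapsed to a single output and the student is trained on it with ordinary cross-entropy:

```python
import torch
import torch.nn.functional as F

def make_hard_target(teacher_logits):
    # Step 1: collapse the teacher's distribution to one concrete output per example
    # (greedy argmax here; sampling from the teacher would also fit the proposal).
    return teacher_logits.argmax(dim=-1)  # shape [batch], a single index per example

def hard_label_distillation_loss(student_logits, hard_target):
    # Step 2: ordinary cross-entropy against that single target index, rather than
    # a term-by-term comparison with the teacher's full output distribution.
    return F.cross_entropy(student_logits, hard_target)

# Example usage with the same illustrative shapes as above.
teacher_logits = torch.randn(4, 200_000)
student_logits = torch.randn(4, 200_000)
loss = hard_label_distillation_loss(student_logits, make_hard_target(teacher_logits))
```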

Analyze this proposed strategy. Explain precisely how it addresses the computational problem, and identify the primary type of information from the teacher's knowledge that the student model no longer has access to under this method.

Updated 2025-10-05

Tags

Ch.3 Prompting - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Analysis in Bloom's Taxonomy

Cognitive Psychology

Psychology

Social Science

Empirical Science

Science