Using KL Divergence for Knowledge Distillation Loss
An alternative approach to knowledge distillation loss involves directly minimizing the discrepancy between the output probability distributions of the teacher and student models. The Kullback-Leibler (KL) divergence is a common choice for formulating this loss: it quantifies how much the student's distribution diverges from the teacher's. Note that KL divergence is not a true distance, since it is asymmetric; in distillation the teacher's distribution is conventionally placed first.
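The computation described above can be sketched as follows. This is a minimal NumPy illustration (not any particular framework's API): the teacher and student each produce logits over the vocabulary, both are converted to probability distributions with a softmax, and the per-example loss is KL(teacher || student).

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_divergence(p_teacher, p_student, eps=1e-12):
    # KL(p_t || p_s) = sum_i p_t[i] * log(p_t[i] / p_s[i]).
    # Clipping avoids log(0) for near-zero probabilities.
    p_t = np.clip(p_teacher, eps, 1.0)
    p_s = np.clip(p_student, eps, 1.0)
    return np.sum(p_t * np.log(p_t / p_s), axis=-1)

# Example: teacher and student logits over a 4-token vocabulary.
teacher_logits = np.array([2.0, 1.0, 0.5, -1.0])
student_logits = np.array([1.5, 1.2, 0.3, -0.5])

p_t = softmax(teacher_logits)
p_s = softmax(student_logits)

loss = kl_divergence(p_t, p_s)  # non-negative; zero iff the distributions match
```

Minimizing this quantity pushes the student's full output distribution toward the teacher's, rather than only matching the teacher's top prediction.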

A research team is training a small, efficient 'student' model to replicate the behavior of a large, powerful 'teacher' model. The team's goal is to find the optimal parameters for the student model ($\hat{\theta}$) by minimizing a loss function over a dataset of simplified inputs ($D$), as defined by the following objective:

$$\hat{\theta} = \arg\min_{\theta} \sum_{x \in D} \mathrm{KL}\left( p_t(\cdot \mid x) \,\middle\|\, p_s(\cdot \mid x; \theta) \right)$$

where $p_t(\cdot \mid x)$ is the teacher's output probability distribution and $p_s(\cdot \mid x; \theta)$ is the student's.

If the team mistakenly configures the training process to use the teacher's original, complex dataset instead of the intended simplified dataset $D$, which of the following outcomes is the most direct and likely consequence for the student model?