Short Answer

Analyzing the Knowledge Distillation Hyperparameter

Consider the combined training objective for a student model in knowledge distillation:

$$\tilde{\theta} = \arg \max_{\theta} \sum_{(\mathbf{x}, \mathbf{y}) \in \mathcal{D}} \log \Pr_{\theta}^{s}(\mathbf{y} \mid \mathbf{x}) - \lambda \cdot \text{Loss}_{\text{kd}}$$

Explain the potential negative consequence for the student model's performance if the hyperparameter λ is set to a very high value, and justify your explanation by referencing the two main components of the formula.
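Since the question hinges on how λ trades the two terms off against each other, a small numerical sketch can make that trade-off concrete. Below is a minimal PyTorch sketch, assuming classification-style logits; the function name `kd_loss`, the temperature `T`, and the choice of a KL divergence for Loss_kd are illustrative assumptions, since the formula above leaves Loss_kd unspecified.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, targets, lam=0.5, T=2.0):
    # Hard-label term: negative log-likelihood of the gold targets.
    # Maximizing sum log Pr^s(y|x) is the same as minimizing this.
    nll = F.cross_entropy(student_logits, targets)

    # Distillation term Loss_kd (here assumed to be a KL divergence
    # between the teacher's and student's softened distributions).
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)

    # Combined minimization objective: NLL + lambda * Loss_kd,
    # i.e. the negation of the arg-max objective in the question.
    return nll + lam * kd

# Toy usage: with a very large lambda, the KD term dominates the
# total loss, so the gradient signal from the gold labels is
# effectively drowned out.
student_logits = torch.randn(4, 10, requires_grad=True)
teacher_logits = torch.randn(4, 10)
targets = torch.randint(0, 10, (4,))
loss = kd_loss(student_logits, teacher_logits, targets, lam=100.0)
loss.backward()
```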




Tags

Ch.4 Alignment - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Analysis in Bloom's Taxonomy

Cognitive Psychology

Psychology

Social Science

Empirical Science

Science
