An engineer is fine-tuning a language model and observes that the training process is highly unstable. The model's performance fluctuates wildly, and the training loss sometimes spikes dramatically, suggesting the policy updates are too aggressive. Which of the following modifications to the optimization objective is most specifically designed to counteract this problem by directly constraining the magnitude of policy changes at each step?
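The scenario describes the motivation for objectives that bound how far the policy can move in a single update. One widely used example is a clipped surrogate objective in the style of PPO; the sketch below is illustrative only (the function name and the 0.2 clipping epsilon are assumptions, not part of the question), showing how clipping the probability ratio caps the objective and thus limits aggressive updates:

```python
def clipped_surrogate(ratio, advantage, eps=0.2):
    # PPO-style clipped objective term for a single sample:
    #   min(r * A, clip(r, 1 - eps, 1 + eps) * A)
    # where r is the new/old policy probability ratio and A the advantage.
    # Clipping removes the incentive to push the ratio outside
    # the trust region [1 - eps, 1 + eps].
    clipped = max(1.0 - eps, min(ratio, 1.0 + eps))
    return min(ratio * advantage, clipped * advantage)

# Ratio well above 1 + eps: the objective is capped at (1 + eps) * A,
# so the gradient through the ratio vanishes and the update stays bounded.
print(clipped_surrogate(1.5, 1.0))   # capped at 1.2
# Ratio inside the trust region: the objective passes through unclipped.
print(clipped_surrogate(1.1, 1.0))   # 1.1, no clipping applied
```

An alternative modification with the same goal is an explicit KL-divergence penalty between the new and old policies added to the objective; both mechanisms directly constrain the magnitude of per-step policy change.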
Tags
Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Stabilizing an Erratic Training Process
Analyzing the Impact of a Policy Divergence Penalty