Learn Before
Diagnosing Value Function Training Issues
Given the following case study and the associated loss function, explain the most likely reason for the observed training behavior. Specifically, how can the value function achieve a low loss without providing a useful signal to improve the model's text generation?
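One way this failure mode can occur: if the reward signal is nearly constant across generations, the value function can collapse to a constant prediction that exactly matches its own bootstrapped target, driving the value loss to zero while the resulting advantages, the quantity that actually scales the policy-gradient update, also vanish. A minimal sketch (the constant rewards and the collapsed value are hypothetical numbers, not from the case study):

```python
# Hedged sketch: a value function can reach zero loss by matching a
# near-constant reward signal, yet yield zero advantages (no policy signal).
rewards = [0.5, 0.5, 0.5, 0.5]   # hypothetical near-constant rewards
gamma = 0.9                       # discount factor

# Suppose the value function has collapsed to the constant fixed point
# of a constant reward stream, V = r / (1 - gamma) = 5.0.
v_const = 5.0

# The one-step TD targets r + gamma * V_next then equal the prediction...
td_targets = [r + gamma * v_const for r in rewards]
value_loss = sum((v_const - t) ** 2 for t in td_targets) / len(rewards)

# ...so the value loss is zero, and the advantages used to weight the
# policy-gradient update are zero too: the policy learns nothing about
# which tokens were better or worse.
advantages = [t - v_const for t in td_targets]
print(value_loss)   # 0.0
print(advantages)   # [0.0, 0.0, 0.0, 0.0]
```

The value function is "accurate" in the loss sense, but because it carries no information that discriminates between states, it provides no useful signal for improving generation.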
Tags
Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
During a reinforcement learning update for a language model, the value function is trained to predict future rewards. At a specific step, the value function's output for the current state is V_current = 3.0. The model then generates a token, for which a reward model provides a score of r = 0.5. The value function's output for the new state is V_next = 4.0. Assuming a discount factor of γ = 0.9, the training objective is to minimize the squared difference between V_current and a target value. Based on these figures, what does the training objective imply about the initial prediction V_current?
Diagnosing Value Function Training Issues
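For reference, assuming the standard one-step temporal-difference target (not stated explicitly in the question), the arithmetic works out as:

```latex
\text{target} = r + \gamma V_{\text{next}} = 0.5 + 0.9 \times 4.0 = 4.1,
\qquad
L = \left(V_{\text{current}} - \text{target}\right)^{2} = (3.0 - 4.1)^{2} = 1.21
```

so the objective implies that the initial prediction V_current = 3.0 underestimates the target by 1.1 and should be revised upward.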