Diagnosing Value Function Training
An engineer is training a language model using a reinforcement learning process. They observe that the model's value function, which is supposed to predict the expected future reward from a given state, is consistently overestimating those rewards. During a single training step, the value function's output for the current state is 5.0. The calculated target value, based on the immediate reward and the next state's value, is 3.5. The training objective is to minimize the squared error between the prediction and the target: (target - prediction)^2. To achieve this objective, how should the training algorithm adjust the value function's output for the current state, and why is this adjustment necessary?
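To make the required adjustment concrete, the sketch below differentiates the squared-error objective from the question and takes one gradient-descent step. It is a minimal illustration only: the learning rate (and the choice of plain gradient descent) are hypothetical assumptions, not part of the original question.

```python
# One gradient-descent step on the squared-error objective (target - prediction)^2.
# prediction and target are the figures from the question; learning_rate is a
# hypothetical choice added for illustration.
prediction = 5.0  # value function's output V(s) for the current state
target = 3.5      # target built from the immediate reward and next state's value

loss = (target - prediction) ** 2    # (3.5 - 5.0)^2 = 2.25
grad = -2.0 * (target - prediction)  # d(loss)/d(prediction) = 3.0 (positive)

learning_rate = 0.1                  # hypothetical step size
prediction -= learning_rate * grad   # 5.0 - 0.1 * 3.0 ≈ 4.7

print(loss, grad, prediction)
```

Because the gradient with respect to the prediction is positive whenever the prediction exceeds the target, each descent step lowers the output, pulling the overestimate down toward the target of 3.5.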
Tags
Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Application in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
During a reinforcement learning update for a language model, the value function is trained to predict future rewards. At a specific step, the value function's output for the current state is V_current = 3.0. The model then generates a token, for which a reward model provides a score of r = 0.5. The value function's output for the new state is V_next = 4.0. Assuming a discount factor of γ = 0.9, the training objective is to minimize the squared difference between V_current and a target value. Based on these figures, what does the training objective imply about the initial prediction V_current? (See the worked sketch after this list.)
Diagnosing Value Function Training
Diagnosing Value Function Training Issues
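The related question above leaves the target value implicit. Under the standard one-step temporal-difference reading (an assumption consistent with the setup, since the question only says "a target value"), the target is r + γ·V_next, and the given figures can be plugged in directly:

```python
# One-step TD target computed from the related question's figures.
v_current = 3.0  # value prediction for the current state
r = 0.5          # reward-model score for the generated token
v_next = 4.0     # value prediction for the next state
gamma = 0.9      # discount factor

target = r + gamma * v_next    # 0.5 + 0.9 * 4.0 ≈ 4.1
td_error = target - v_current  # ≈ 1.1: the target exceeds V_current

print(target, td_error)
```

A positive TD error of roughly 1.1 means minimizing the squared difference pushes V_current upward, i.e. under these figures the initial prediction of 3.0 is an underestimate.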