During the policy optimization phase of training a large language model, the model is being rewarded for providing detailed explanations. The 'reference policy' is a version of the model that typically gives concise, direct answers. The current policy generates two possible responses to a user's query:
Response A: 'Yes.' Response B: 'Affirmative, the data you have presented aligns with the expected parameters, and therefore, the conclusion you have reached is indeed correct and validated.'
Assuming the reference policy would have a very high probability of generating Response A and a near-zero probability of generating Response B, which response would incur a larger penalty term designed to prevent deviation from the reference policy, and why?
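The penalty in question can be made concrete with a small numerical sketch. Below is a minimal illustration of the per-response KL-style penalty used in PPO-based RLHF, beta * (log pi_theta(y|x) − log pi_ref(y|x)); all probabilities and the beta coefficient are hypothetical values chosen to match the scenario, not values from any actual training run.

```python
import math

# KL penalty coefficient (assumed value for illustration).
beta = 0.1

# Hypothetical probabilities of each response under the current
# policy (pi_theta) and the reference policy (pi_ref).
probs = {
    "A ('Yes.')":         {"policy": 0.30, "ref": 0.90},
    "B (verbose answer)":  {"policy": 0.60, "ref": 1e-6},
}

def kl_penalty(p_policy, p_ref, beta=beta):
    """Per-response penalty: grows as the current policy puts
    probability mass where the reference policy has almost none."""
    return beta * (math.log(p_policy) - math.log(p_ref))

for name, p in probs.items():
    print(f"{name}: penalty = {kl_penalty(p['policy'], p['ref']):+.3f}")
```

With these toy numbers, Response A's term is slightly negative (the reference assigns it even more probability than the current policy), while Response B's term is large and positive, since the near-zero reference probability makes the log-ratio explode. This is the sense in which Response B incurs the larger penalty.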
Tags
Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Overall PPO Objective Function for Language Models
Consequences of Policy Regularization Strength
Analysis of the Policy Regularization Penalty
Your team is running RLHF for a customer-facing LL...
You’re running an RLHF fine-tuning job for an inte...
You are reviewing an RLHF training run for an inte...
Diagnosing Instability in an RLHF + PPO Training Run
Interpreting Conflicting RLHF Signals: Reward Model Ranking vs. PPO Updates Under KL Regularization
Choosing and Justifying an RLHF Objective Under Competing Product Constraints
Designing an RLHF Training Blueprint for a Regulated Customer-Support LLM
Tuning an RLHF + PPO Update When Reward Improves but Behavior Regresses
Post-Deployment Drift After RLHF: Diagnosing Reward Model and PPO/KL Interactions
Root-Cause Analysis of a “Reward Hacking” Spike During RLHF with PPO