Essay

Diagnosing Instability in an RLHF + PPO Training Run

You are on an applied LLM team running RLHF to improve a customer-support assistant. Human annotators provide pairwise preferences between candidate responses sampled for each prompt, and you train a reward model from these rankings. You then fine-tune the policy with PPO using an advantage-weighted policy-gradient objective, while also applying a KL-divergence penalty to keep the policy close to a frozen reference model.
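For reference, the sketch below shows how the reward model in this setup is commonly trained: a Bradley-Terry style pairwise ranking loss over the model's scalar scores for the preferred and dispreferred response. The PyTorch framing, the function name, and the tensor names are illustrative assumptions, not a prescribed implementation.

```python
import torch
import torch.nn.functional as F

def pairwise_ranking_loss(reward_chosen: torch.Tensor,
                          reward_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry style ranking loss on scalar reward-model scores.

    reward_chosen / reward_rejected: shape (batch,), the reward model's
    scores for the human-preferred and dispreferred responses to the same
    prompt. The loss depends only on the score margin, so the model learns
    a ranking over responses, not calibrated absolute rewards.
    """
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()
```

Because only the margin matters, the learned reward can be arbitrarily miscalibrated far from the preference data's distribution, which is exactly where an optimizing policy will push it.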

After a few iterations, you observe the following pattern: (1) the reward model’s training loss continues to decrease and its accuracy on held-out preference pairs improves, but (2) the PPO-trained policy starts producing noticeably longer, more repetitive answers and occasionally violates style/safety constraints that the reference model followed; the average reward-model score of sampled outputs keeps increasing, yet offline human spot-checks rate the outputs as worse.

Write an analysis that explains a plausible causal chain linking (a) reward model training as a ranking problem, (b) the advantage-weighted policy-gradient objective used in PPO, and (c) the role of the KL penalty in PPO’s composite objective. Your answer must propose at least two concrete, testable interventions (e.g., changes to data collection, reward model training, PPO/advantage estimation, or KL/β settings) and justify how each intervention would change the incentives/updates and address the observed failure mode. Be explicit about the tradeoff between maximizing the learned reward and staying close to the reference policy.
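To make parts (b) and (c) concrete, the sketch below shows one common way to combine PPO's clipped surrogate with a β-weighted KL penalty toward a frozen reference model. The function name, tensor layout, and the choice to put the KL term directly in the loss (rather than folding it into the per-token reward) are illustrative assumptions; β is the knob that trades off maximizing the learned reward against staying close to the reference policy.

```python
import torch

def ppo_step_objective(logp_new: torch.Tensor,    # log pi_theta(a_t|s_t) under the current policy
                       logp_old: torch.Tensor,    # log-probs under the policy that sampled the batch
                       logp_ref: torch.Tensor,    # log-probs under the frozen reference model
                       advantages: torch.Tensor,  # advantage estimates built from reward-model scores
                       beta: float = 0.1,         # KL coefficient: larger beta => stay closer to the reference
                       clip_eps: float = 0.2) -> torch.Tensor:
    """Per-token PPO loss with a KL penalty toward the reference policy (a minimal sketch).

    The clipped surrogate pushes probability mass toward tokens with positive
    advantage as judged by the learned reward model, while the beta-weighted
    KL term penalizes drifting away from the reference model's distribution.
    """
    ratio = torch.exp(logp_new - logp_old)
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    policy_loss = -torch.min(ratio * advantages, clipped * advantages).mean()
    # Crude sample-based estimate of KL(pi_theta || pi_ref) on the sampled tokens.
    kl_penalty = (logp_new - logp_ref).mean()
    return policy_loss + beta * kl_penalty
```

If β is too small relative to the (miscalibrated) reward-model scores, the surrogate term dominates and the policy is free to exploit reward-model blind spots such as length and repetition; if β is too large, updates are dominated by the KL term and the policy barely improves on the reference.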

Updated 2026-02-06

Tags: Ch.2 Generative Models - Foundations of Large Language Models, Ch.4 Alignment - Foundations of Large Language Models, Foundations of Large Language Models, Foundations of Large Language Models Course, Computing Sciences
