Essay

Choosing and Justifying an RLHF Objective Under Competing Product Constraints

You are leading alignment fine-tuning for a customer-support LLM. You have (1) a dataset of human pairwise preferences for multiple candidate responses per prompt, and (2) a supervised fine-tuned (SFT) model that is already safe and on-brand but sometimes less helpful. After an initial RLHF run, stakeholders report two issues: the model is becoming noticeably more verbose and stylistically different from the SFT baseline, and training is sensitive—small hyperparameter changes cause large swings in behavior.

Write an essay that proposes a concrete RLHF training approach using a reward model and PPO, and justify your design choices by explicitly connecting: (a) how you would train the reward model from rankings, (b) how the policy-gradient objective with an advantage signal would shape token-level probability updates, and (c) how PPO's stabilization mechanisms, especially a KL-divergence penalty to a frozen reference policy, should be set and tuned to balance "improve helpfulness" against "stay close to the trusted baseline."
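
For reference, the sketch below (a minimal sketch assuming PyTorch; `reward_model_loss`, `ppo_step_loss`, `beta`, `clip_eps`, and the tensor names are illustrative, not part of the assignment) shows the standard Bradley-Terry pairwise loss used to train a reward model from rankings, and a clipped, advantage-weighted PPO surrogate with a KL penalty to a frozen reference policy.

```python
# Minimal sketch, assuming PyTorch; all function and variable names are illustrative.
import torch
import torch.nn.functional as F


def reward_model_loss(score_chosen: torch.Tensor, score_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry pairwise loss from rankings.

    Maximizes P(chosen preferred over rejected) = sigmoid(r_chosen - r_rejected),
    so the reward model learns scalar scores consistent with the human preferences.
    """
    return -F.logsigmoid(score_chosen - score_rejected).mean()


def ppo_step_loss(logp_new, logp_old, logp_ref, advantages, beta=0.05, clip_eps=0.2):
    """Clipped, advantage-weighted surrogate with a KL penalty to a frozen reference.

    logp_new / logp_old / logp_ref: (batch, seq) log-probs of the sampled tokens
    advantages:                     (batch, seq) per-token advantage estimates
    beta:                           KL coefficient (helpfulness vs. staying on-baseline)
    clip_eps:                       PPO clipping range
    """
    ratio = torch.exp(logp_new - logp_old)                      # per-token importance ratio
    surrogate = torch.min(ratio * advantages,
                          torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages)
    kl_to_ref = (logp_new - logp_ref).mean()                    # sample-based KL(policy || reference) estimate
    return -surrogate.mean() + beta * kl_to_ref                 # minimize: negative surrogate plus KL penalty
```

Pushing up `logp_new` on tokens with positive advantage and down on tokens with negative advantage is the token-level update referred to in (b); the `beta` term in (c) pulls those same updates back toward the SFT reference.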

Your answer must include at least one specific failure mode you expect if the KL penalty (or its coefficient β) is set too low and one failure mode if it is set too high, and explain how those failure modes would manifest in the PPO updates given the advantage-weighted log-probability objective.
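
As a companion sketch (same assumptions as above; the per-token reward shaping shown here is one common RLHF construction, not prescribed by the assignment), the comments indicate how too small or too large a β would surface in the advantage-weighted updates.

```python
import torch


def shaped_rewards(rm_score: torch.Tensor, logp_policy: torch.Tensor,
                   logp_ref: torch.Tensor, beta: float) -> torch.Tensor:
    """Per-token rewards: reward-model score at the final token, minus a per-token
    KL penalty to the frozen reference everywhere (a common RLHF construction).

    rm_score:  (batch,) reward-model scores for complete responses
    logp_*:    (batch, seq) per-token log-probs under the policy / frozen reference
    beta:      KL coefficient

    beta too low:  the KL term barely offsets positive advantages, so any pattern the
                   reward model favors (extra length, boilerplate politeness) keeps
                   getting its token log-probabilities pushed up, and the policy drifts
                   away from the SFT baseline (reward hacking, growing verbosity).
    beta too high: the -beta * KL term dominates the advantage signal, so the
                   advantage-weighted log-prob gradient mostly points back toward the
                   reference and helpfulness barely improves.
    """
    kl_per_token = logp_policy - logp_ref            # per-token drift estimate
    rewards = -beta * kl_per_token                   # penalize drift at every token
    rewards[:, -1] = rewards[:, -1] + rm_score       # pay the reward-model score at the end
    return rewards
```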
