Analysis of Reward Function under Policy Convergence
In a language model alignment framework, the reward for generating a response y to a prompt x is given by the equation r(x, y) = β log( π_θ(y|x) / π_θ_ref(y|x) ) + β log Z(x), where π_θ is the target policy, π_θ_ref is the reference policy, β is a positive constant, and Z(x) is a normalization factor that depends only on the prompt x. Suppose that for a given prompt x, the target policy becomes identical to the reference policy for all possible responses (i.e., π_θ(y|x) = π_θ_ref(y|x) for every y). What does this imply about the reward for any response y? Explain your reasoning by analyzing the components of the equation.
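A short worked substitution may help frame the reasoning; it assumes the DPO-style reward form written above and uses the same symbols (π_θ, π_θ_ref, β, Z(x)):

% Sketch: substitute the convergence condition into the reward above.
\[
  \pi_\theta(y \mid x) = \pi_{\theta_\text{ref}}(y \mid x)
  \;\Longrightarrow\;
  \log \frac{\pi_\theta(y \mid x)}{\pi_{\theta_\text{ref}}(y \mid x)} = \log 1 = 0,
\]
\[
  r(x, y) = \beta \cdot 0 + \beta \log Z(x) = \beta \log Z(x).
\]
% Since Z(x) depends only on the prompt, every response y receives the same
% constant reward once the two policies coincide.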
Tags
Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
In a policy-based language model alignment process, the reward r(x, y) for a response y to a prompt x is defined by the equation r(x, y) = β log( π_θ(y|x) / π_θ_ref(y|x) ) + β log Z(x), where π_θ is the target policy, π_θ_ref is the reference policy, β is a positive scaling factor, and Z(x) is a normalization factor. If, for a specific response y_1, the target policy assigns a lower probability than the reference policy (i.e., π_θ(y_1|x) < π_θ_ref(y_1|x)), what is the direct consequence for the log-ratio component of the reward calculation?
In a framework for aligning language models, a reward function is defined as r(x, y) = β log( π_θ(y|x) / π_θ_ref(y|x) ) + β log Z(x), where π_θ is the target policy, π_θ_ref is a reference policy, β is a scaling factor, and Z(x) is a normalization factor dependent on the prompt x. Given two distinct responses, y_1 and y_2, to the same prompt x, which expression correctly represents the difference in their rewards, r(x, y_1) − r(x, y_2)?
Derivation of DPO Preference Probability from Policy Ratios