Critique of a Modified Policy Formulation
In reinforcement learning from human feedback, a target policy π* is often defined by re-weighting a reference policy π_ref based on a reward r(x, y), as shown in the equation:

π*(y|x) = (1/Z(x)) · π_ref(y|x) · exp(r(x, y)/β)

where Z(x) is a normalizing constant and β is a positive scalar. A researcher proposes a simplification by removing the reference policy term entirely, creating a new target:

π*_new(y|x) = (1/Z′(x)) · exp(r(x, y)/β)

Evaluate this proposed simplification. Discuss one potential advantage and two significant disadvantages of using this new formulation to guide a language model's learning process.
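A minimal numerical sketch of the two formulations may help before answering. All names and values below (the target_policy helper, the toy probabilities and rewards) are hypothetical, chosen only for illustration: the code re-weights a reference distribution by exp(r/β) and normalizes, then drops the π_ref factor to form the researcher's simplified target.

```python
import numpy as np

def target_policy(pi_ref, rewards, beta, use_reference=True):
    """Re-weight a reference distribution by exp(r / beta) and normalize.

    pi_ref  : reference probabilities over candidate outputs y (sums to 1)
    rewards : reward r(x, y) for each candidate
    beta    : positive scalar controlling the strength of re-weighting
    """
    weights = np.exp(rewards / beta)
    unnormalized = pi_ref * weights if use_reference else weights
    return unnormalized / unnormalized.sum()  # divide by Z(x)

# Toy example: three candidate outputs for one prompt x
pi_ref = np.array([0.7, 0.2, 0.1])   # reference model favors y1
rewards = np.array([0.0, 1.0, 2.0])  # reward model favors y3

# Original target: reward shifts mass toward y3, but pi_ref still anchors it
print(target_policy(pi_ref, rewards, beta=1.0))

# Simplified target: the reward alone decides; pi_ref's fluency prior is gone
print(target_policy(pi_ref, rewards, beta=1.0, use_reference=False))
```

Comparing the two printed distributions makes the trade-off concrete: the simplified target can assign high probability to an output the reference model considered very unlikely, which is the root of the disadvantages the question asks about.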
Tags
Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Evaluation in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Derivation of the KL Divergence Objective for Policy Optimization
A language model's behavior is guided by a target probability distribution, π*, which is defined by re-weighting a reference distribution, π_ref, based on a reward score, r(x, y). The relationship is given by the formula:

π*(y|x) = (1/Z(x)) · π_ref(y|x) · exp(r(x, y)/β)

In this formula, Z(x) is a normalizing constant and β is a positive scalar parameter. Analyze the effect of significantly increasing the value of β. What is the most direct consequence for the target distribution π*?
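A short, self-contained sketch of the β effect, using the same hypothetical toy distribution as above: as β grows, exp(r/β) → 1 for every candidate, so the re-weighting vanishes and π* collapses back toward π_ref.

```python
import numpy as np

pi_ref = np.array([0.7, 0.2, 0.1])   # hypothetical reference probabilities
rewards = np.array([0.0, 1.0, 2.0])  # hypothetical reward scores

for beta in [0.1, 1.0, 10.0, 100.0]:
    unnorm = pi_ref * np.exp(rewards / beta)
    pi_star = unnorm / unnorm.sum()   # normalize by Z(x)
    print(f"beta={beta:>5}: {pi_star.round(3)}")

# Small beta sharpens pi_star toward the highest-reward output;
# large beta leaves the reference distribution essentially unchanged,
# since exp(r / beta) approaches 1 for all candidates.
```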
Calculating a Target Policy Distribution