Learn Before
Calculating a Target Policy Distribution
A language model's behavior is being refined. The goal is to create a new target distribution, π*, by adjusting a reference distribution, π_ref, based on a reward score, r(x, y). The relationship is defined by the formula: π*(y|x) = (1/Z(x)) · π_ref(y|x) · exp(r(x, y)/β), where Z(x) is a normalization term ensuring the probabilities sum to 1. Given the scenario below, calculate the new probability for the output 'Bonjour' under the target distribution π*. Assume β = 1.0 for this calculation.
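The calculation can be sketched as follows. Since the scenario's actual numbers are not shown here, the reference probabilities and reward scores below are hypothetical placeholders; the structure of the computation, re-weight by exp(r/β) and then normalize by Z(x), is what matters.

```python
import math

# Hypothetical reference probabilities pi_ref(y|x) and reward scores r(x, y)
# for a few candidate outputs (placeholder values, not from the scenario).
pi_ref = {"Bonjour": 0.5, "Salut": 0.3, "Hello": 0.2}
reward = {"Bonjour": 1.0, "Salut": 0.5, "Hello": 0.0}
beta = 1.0

# Unnormalized target weights: pi_ref(y|x) * exp(r(x, y) / beta)
weights = {y: p * math.exp(reward[y] / beta) for y, p in pi_ref.items()}

# Z(x) is the sum of the weights, so the target probabilities sum to 1.
Z = sum(weights.values())
pi_star = {y: w / Z for y, w in weights.items()}

print(pi_star["Bonjour"])
```

Because 'Bonjour' has the highest reward in this sketch, its probability under π* rises above its value under π_ref; outputs with lower rewards are correspondingly down-weighted.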
Tags
Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Application in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Derivation of the KL Divergence Objective for Policy Optimization
A language model's behavior is guided by a target probability distribution, π*, which is defined by re-weighting a reference distribution, π_ref, based on a reward score, r(x, y). The relationship is given by the formula: π*(y|x) = (1/Z(x)) · π_ref(y|x) · exp(r(x, y)/β). In this formula, β is a positive scalar parameter. Analyze the effect of significantly increasing the value of β. What is the most direct consequence for the target distribution π*?
Critique of a Modified Policy Formulation
Calculating a Target Policy Distribution