Learn Before
  • Bradley-Terry Model for Preference Probability

Interpreting Preference Model Output

A preference model calculates the probability that response y_a is preferred over response y_b for a given input x using the formula: Pr(y_a > y_b | x) = Sigmoid(r(x, y_a) - r(x, y_b)), where r(x, y) is a scalar reward score. If the model outputs a probability of 0.95 for Pr(y_a > y_b | x), what can you conclude about the relative values of the reward scores r(x, y_a) and r(x, y_b)? Explain your reasoning.
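The conclusion can be checked numerically by inverting the sigmoid: if Sigmoid(Δ) = 0.95, then Δ = ln(0.95 / 0.05) ≈ 2.94, so r(x, y_a) must exceed r(x, y_b) by about 2.94. A minimal sketch in Python (function names are illustrative, not from any particular library):

```python
import math

def sigmoid(z: float) -> float:
    """Logistic function used in the Bradley-Terry preference model."""
    return 1.0 / (1.0 + math.exp(-z))

def reward_gap(p: float) -> float:
    """Invert the sigmoid: recover r(x, y_a) - r(x, y_b) from Pr(y_a > y_b | x)."""
    return math.log(p / (1.0 - p))

gap = reward_gap(0.95)
print(round(gap, 3))           # reward gap r(x, y_a) - r(x, y_b): 2.944
print(round(sigmoid(gap), 2))  # round-trip check: 0.95
```

Because the sigmoid is strictly increasing, any probability above 0.5 implies a positive reward gap, and 0.95 pins that gap to roughly 2.94 in reward units.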


Tags

Ch.4 Alignment - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Application in Bloom's Taxonomy

Cognitive Psychology

Psychology

Social Science

Empirical Science

Science

Related
  • A preference model calculates the probability that response y_a is preferred over response y_b for a given input x using the formula: Pr(y_a > y_b | x) = Sigmoid(r(x, y_a) - r(x, y_b)), where r(x, y) is a real-valued score for a given response. Based on this model, which of the following statements accurately describes its behavior?

  • A preference model calculates the probability that a 'winning' response, y_w, is preferred over a 'losing' response, y_l, for a given input x. The model uses the formula: Pr(y_w > y_l | x) = Sigmoid(r(x, y_w) - r(x, y_l)), where r(x, y) is a scalar reward score. In a specific training example, the reward scores for the two responses are found to be nearly identical, i.e., r(x, y_w) ≈ r(x, y_l). What does this imply about the calculated preference probability?

  • Derivation of DPO Preference Probability from Policy Ratios

  • Interpreting Preference Model Output