Short Answer

Correcting a Reward Model's Preference Error

Imagine a reward model is being trained on human preference data. For a given prompt, a human has indicated that Response A is better than Response B. However, the model currently assigns a higher score to Response B than to Response A. Describe the mechanism by which minimizing the ranking loss during a training step corrects this specific error. How does the loss calculation influence the adjustment of the model's internal parameters?
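In case a concrete illustration of the scenario helps: below is a minimal sketch, assuming the common Bradley-Terry pairwise ranking loss L = -log σ(r(A) - r(B)) used in RLHF reward modeling. The linear `reward_head`, the `features_a`/`features_b` vectors, and the learning rate are hypothetical stand-ins for a real transformer reward model, chosen only to make the gradient mechanics visible.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-in for a reward model: in practice this would be a full
# transformer with a scalar head; a linear layer over fixed "features"
# is enough to show the mechanics. All names here are illustrative.
reward_head = torch.nn.Linear(8, 1)
optimizer = torch.optim.SGD(reward_head.parameters(), lr=0.5)

features_a = torch.randn(8)  # representation of the preferred Response A
features_b = torch.randn(8)  # representation of the dispreferred Response B

def margin() -> float:
    """Current score gap r(A) - r(B); negative means the model is wrong."""
    with torch.no_grad():
        return (reward_head(features_a) - reward_head(features_b)).item()

# Make the toy model start out wrong (scoring B above A), matching the
# scenario in the question.
if margin() > 0:
    features_a, features_b = features_b, features_a

print(f"margin before step: {margin():+.3f}")

# Pairwise ranking loss (Bradley-Terry form): -log sigmoid(r(A) - r(B)).
# When the model wrongly ranks B above A, the margin is negative and the
# loss's gradient w.r.t. the margin, -sigmoid(r(B) - r(A)), has a large
# magnitude: the worse the error, the stronger the corrective push.
loss = -F.logsigmoid(reward_head(features_a) - reward_head(features_b)).mean()
loss.backward()
optimizer.step()

# One gradient step moves the parameters so that r(A) rises relative to
# r(B); the margin after the step is strictly larger than before.
print(f"margin after step:  {margin():+.3f}")
```

The same logic applies at every parameter: backpropagation distributes the margin gradient through the network, so each weight is nudged in the direction that increases r(A) - r(B) for this particular pair.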

Updated 2025-09-28

Tags

- Ch.2 Generative Models - Foundations of Large Language Models
- Foundations of Large Language Models
- Foundations of Large Language Models Course
- Computing Sciences
- Ch.4 Alignment - Foundations of Large Language Models
- Analysis in Bloom's Taxonomy
- Cognitive Psychology
- Psychology
- Social Science
- Empirical Science
- Science