Learn Before
Reward Model Training Diagnosis
Given the case study and the reward model loss function below, explain why the data inconsistency is the likely cause of the model's failure to learn and of its high, stagnant training loss.
Loss Function:
$$\mathcal{L}(\phi) = -\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}}\left[\log \sigma\left(r_\phi(x, y_w) - r_\phi(x, y_l)\right)\right]$$

where $r_\phi(x, y)$ is the reward assigned to response $y$ for prompt $x$, $\sigma$ is the sigmoid function, and $(x, y_w, y_l) \sim \mathcal{D}$ are preference pairs with $y_w$ preferred over $y_l$.
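A minimal sketch of the failure mode, assuming a PyTorch-style setup; the two-element scores tensor, the toy pairs list, and the training loop are hypothetical stand-ins for the reward model $r_\phi$ and the preference dataset, not the case study's actual code:

import math
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Two learnable scores standing in for r_phi(x, y_A) and r_phi(x, y_B).
scores = torch.nn.Parameter(torch.zeros(2))
opt = torch.optim.SGD([scores], lr=0.1)

# Inconsistent preference data: the SAME pair is labeled both ways equally often,
# e.g. half the annotations say A > B and half say B > A.
pairs = [(0, 1), (1, 0)] * 50  # (winner_index, loser_index)

for step in range(200):
    # Pairwise ranking loss: -log sigmoid(r(winner) - r(loser)), averaged over pairs.
    per_pair = torch.stack([-F.logsigmoid(scores[w] - scores[l]) for w, l in pairs])
    loss = per_pair.mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final loss         = {loss.item():.4f}")   # stays ~0.6931
print(f"ln 2               = {math.log(2):.4f}")   # the chance-level floor
print(f"learned reward gap = {(scores[0] - scores[1]).item():.4f}")  # stays ~0

Because each contradictory ordering contributes an exactly opposite gradient, the scores never separate and the loss stays pinned at ln 2 ≈ 0.693: no reward margin can fit both orderings of the same pair at once, which matches the high, stagnant training loss described in the case study.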
Tags
Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Pair-wise Ranking Loss Formula for RLHF Reward Model
Empirical Reward Model Loss Formula using Bradley-Terry Model
A reward model is trained to learn human preferences by minimizing the following loss function, which is an expectation over a preference dataset $\mathcal{D}$:

$$\mathcal{L}(\phi) = -\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}}\left[\log \sigma\left(r_\phi(x, y_w) - r_\phi(x, y_l)\right)\right]$$

In this dataset, $y_w$ represents a response preferred over response $y_l$ for a given input $x$. What is the primary effect of successfully minimizing this loss function on the model's behavior? (A short worked note on this appears after the related items below.)
Reward Model Training Diagnosis
Composition of Reward Model Parameters (ϕ)
Approximating Expected Loss with Empirical Loss
Empirical Reward Model Loss Formula
Impact of Prediction Confidence on Reward Model Loss
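For the related question above about what minimizing the loss achieves, a brief worked note using the same symbols as the formula; the scalar margin $\Delta$ is introduced here only for convenience:

$$\ell(\Delta) = -\log \sigma(\Delta), \qquad \Delta = r_\phi(x, y_w) - r_\phi(x, y_l)$$
$$\frac{d\ell}{d\Delta} = -\bigl(1 - \sigma(\Delta)\bigr) < 0$$

Since the derivative is strictly negative, gradient descent increases $\Delta$ on every preference pair: minimizing the loss drives the model to assign a higher reward to the preferred response $y_w$ than to $y_l$, and $\ell \to 0$ only as $\sigma(\Delta) \to 1$.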