Case Study

Reward Model Training Diagnosis

Given the case study and the reward model loss function below, explain why data inconsistency is the likely cause of the model's failure to learn and of the high, stagnant training loss.

Loss Function: $\mathcal{L}_r(\phi) = -\mathbb{E}_{(\mathbf{x},\mathbf{y}_a,\mathbf{y}_b)\sim\mathcal{D}_r}\left[\log \Pr_{\phi}(\mathbf{y}_a \succ \mathbf{y}_b \mid \mathbf{x})\right]$
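In RLHF-style reward modeling, the preference probability $\Pr_{\phi}(\mathbf{y}_a \succ \mathbf{y}_b \mid \mathbf{x})$ is typically parameterized with a Bradley-Terry model, i.e. a sigmoid of the reward difference. The sketch below (an illustrative assumption, not code from the case study) shows why contradictory preference labels pin the loss at a high plateau: if the same pair appears with both orderings, the best achievable prediction is 0.5, and each example then contributes $-\log 0.5 = \log 2 \approx 0.693$, so the loss stagnates rather than decreasing.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def pairwise_loss(r_a, r_b):
    """Negative log-likelihood of preferring y_a over y_b under a
    Bradley-Terry model: Pr(y_a > y_b | x) = sigmoid(r_a - r_b)."""
    return -math.log(sigmoid(r_a - r_b))

# With contradictory labels, the reward model's optimum is r_a == r_b,
# so every example contributes -log(0.5) = log 2 and the loss stagnates.
stagnant = pairwise_loss(1.0, 1.0)

# With consistent labels, a positive margin r_a - r_b drives the loss
# toward zero as training sharpens the reward difference.
improving = pairwise_loss(3.0, 1.0)
```

A consistent dataset lets gradient descent widen the reward margin and push the loss toward zero; inconsistent pairs pull the margin back toward zero from both sides, which matches the "high, stagnant loss" symptom in the prompt.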


Updated 2025-10-04


Tags: Ch.4 Alignment - Foundations of Large Language Models, Foundations of Large Language Models, Foundations of Large Language Models Course, Computing Sciences, Analysis in Bloom's Taxonomy, Cognitive Psychology, Psychology, Social Science, Empirical Science, Science