Case Study: Evaluating a Reward Model Training Strategy

An AI development team is building a model to score the helpfulness of chatbot responses. To train this model, they create a dataset where each entry consists of a user's prompt, a single chatbot response, and a numerical 'helpfulness' score from 1 to 5 assigned by a human labeler. They train their model to predict this score for any given prompt-response pair. After training, they find the model struggles to consistently differentiate between a 'good' response (score 4) and a 'great' response (score 5).
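A minimal sketch of this pointwise setup, assuming PyTorch; the encoder is abstracted away as a fixed-size embedding per prompt-response pair, and every name here (PointwiseRewardModel, embed_dim) is illustrative rather than the team's actual code:

```python
import torch
import torch.nn as nn

class PointwiseRewardModel(nn.Module):
    """Regresses a single scalar 'helpfulness' score from a pair embedding."""

    def __init__(self, embed_dim: int = 768):
        super().__init__()
        self.head = nn.Linear(embed_dim, 1)

    def forward(self, pair_embedding: torch.Tensor) -> torch.Tensor:
        # One scalar score per (prompt, response) pair.
        return self.head(pair_embedding).squeeze(-1)

model = PointwiseRewardModel()
loss_fn = nn.MSELoss()

# One training step on a toy batch: random embeddings stand in for the real
# encoder output; targets are the human labels on the 1-5 scale.
embeddings = torch.randn(8, 768)
scores = torch.randint(1, 6, (8,)).float()
loss = loss_fn(model(embeddings), scores)
loss.backward()
```

Note that each example is scored in isolation: the loss pushes predictions toward each labeler's absolute number, and no term in it ever compares two responses to the same prompt.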

Based on the principles of learning from human preferences, analyze the team's data collection and training approach. Identify a fundamental limitation of using absolute numerical scores in this way and propose a specific change to their data labeling process that would more effectively teach the model to discern fine-grained differences in response quality.
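For reference when writing your analysis, here is a minimal sketch of the pairwise (Bradley-Terry style) objective commonly used when learning from human preferences, again assuming PyTorch; pairwise_preference_loss and the sample tensors are illustrative:

```python
import torch
import torch.nn.functional as F

def pairwise_preference_loss(reward_chosen: torch.Tensor,
                             reward_rejected: torch.Tensor) -> torch.Tensor:
    # -log sigmoid(r_chosen - r_rejected): minimized when the model scores
    # the preferred response higher than the rejected one by a wide margin.
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Toy rewards for 4 comparison pairs, as one reward model would emit for the
# preferred and the rejected response to each of 4 prompts.
r_chosen = torch.tensor([2.1, 0.3, 1.5, -0.2], requires_grad=True)
r_rejected = torch.tensor([1.8, 0.9, 0.4, -0.5])
loss = pairwise_preference_loss(r_chosen, r_rejected)
loss.backward()
```

Under this formulation the labeler only marks which of two responses to the same prompt is better, rather than assigning each an absolute score.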
