Multiple Choice

A research team is training a reward model for a chatbot designed to generate creative and humorous stories. They notice that human labelers are highly inconsistent when assigning absolute quality scores (e.g., on a 1-10 scale), as humor is very subjective. However, the labelers are much more consistent when asked to choose which of two stories is funnier. Given this situation, which training data approach would likely lead to a more effective and generalizable reward model, and why?
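As background for the question, pairwise preference data of this kind is commonly fit with a Bradley–Terry style objective: the reward model assigns each story a scalar score, and the loss is the negative log-probability that the preferred story scores higher. A minimal sketch (the function name and example scores are illustrative, not from any particular library):

```python
import math

def pairwise_preference_loss(r_preferred: float, r_rejected: float) -> float:
    """Bradley-Terry style loss for training a reward model from
    pairwise comparisons: -log sigmoid(r_preferred - r_rejected)."""
    margin = r_preferred - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The loss shrinks when the model scores the labeler-preferred story higher,
# and grows when the ordering is reversed:
print(pairwise_preference_loss(2.0, 0.0))  # correct ordering: small loss
print(pairwise_preference_loss(0.0, 2.0))  # reversed ordering: large loss
```

Because the loss depends only on the score difference, the model never has to match noisy absolute 1–10 ratings, only the (more consistent) relative judgments.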

Updated 2025-10-05

Tags

Ch.4 Alignment - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Analysis in Bloom's Taxonomy

Cognitive Psychology

Psychology

Social Science

Empirical Science

Science