Match each reward model training approach with the description that best fits its methodology and a key implication of its use.
Tags
Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Choosing a Feedback Method for a Reward Model
A research team is training a reward model for a chatbot designed to generate creative and humorous stories. They notice that human labelers are highly inconsistent when assigning absolute quality scores (e.g., on a 1-10 scale), as humor is very subjective. However, the labelers are much more consistent when asked to choose which of two stories is funnier. Given this situation, which training data approach would likely lead to a more effective and generalizable reward model, and why?
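The scenario above points toward training on pairwise comparisons rather than absolute scores. A minimal sketch of the standard comparison-based formulation (a Bradley-Terry style loss, as commonly used in RLHF reward modeling) is below; the function name and scalar inputs are illustrative, not from the source:

```python
import math

def pairwise_preference_loss(r_preferred: float, r_rejected: float) -> float:
    """Bradley-Terry pairwise loss for reward model training.

    Given the reward model's scalar scores for the story the labeler
    preferred (r_preferred) and the one they rejected (r_rejected),
    the loss is -log(sigmoid(r_preferred - r_rejected)). It is minimized
    when the model scores the preferred story well above the rejected one,
    so only the *relative* ordering matters -- matching the labelers'
    consistent pairwise judgments rather than noisy 1-10 scores.
    """
    margin = r_preferred - r_rejected
    sigmoid = 1.0 / (1.0 + math.exp(-margin))
    return -math.log(sigmoid)

# When the model cannot distinguish the two stories, the loss is log(2):
print(pairwise_preference_loss(1.0, 1.0))  # ~0.6931

# A confident, correct ordering drives the loss toward zero:
print(pairwise_preference_loss(5.0, 0.0))
```

Because the loss depends only on the score difference, the reward model never needs a calibrated absolute scale for humor, which is exactly the signal the labelers could not provide reliably.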