Learn Before
A development team is refining a language model's ability to generate summaries. For each source document, they have the model produce two different summaries. They then present these two summaries side-by-side to a human annotator and ask them to select the one that is of higher quality. Which statement best analyzes the primary strength of this specific approach for collecting human feedback?
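The side-by-side choices collected this way are usually modeled as pairwise preferences, e.g. with the Bradley-Terry model referenced among the related notes. A minimal sketch (function names are illustrative, not from any specific library):

```python
import math

def preference_probability(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry probability that the annotator prefers the first summary,
    given scalar reward-model scores for each summary."""
    return 1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected)))

def pairwise_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Negative log-likelihood of the observed preference; minimizing this
    during reward-model training pushes the chosen summary's score above
    the rejected one's."""
    return -math.log(preference_probability(reward_chosen, reward_rejected))

# Equal scores give a 50/50 preference; a larger score margin gives a
# higher preference probability and a lower training loss.
print(preference_probability(1.0, 1.0))   # 0.5
print(preference_probability(2.0, 1.0))   # ~0.731
print(pairwise_loss(2.0, 1.0) < pairwise_loss(1.0, 2.0))  # True
```

This illustrates why a binary relative judgment is enough to train a scalar reward model: annotators never need to assign absolute quality scores, only pick the better of two outputs.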
Tags
Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Evaluation Criteria for Pairwise Comparison in RLHF
Bradley-Terry Model
Reward Model Training as a Ranking Problem in RLHF
Listwise Ranking for Human Feedback in RLHF
Importance of Variability in Pairwise Preference Data
Evaluating a Feedback Collection Strategy
Rationale for a Feedback Collection Method
Binary Encoding of Pairwise Feedback in RLHF