Learn Before
A language model generates two different summaries of a given article: Summary 1 and Summary 2. A human evaluator reviews both and judges Summary 1 to be more coherent and factually accurate than Summary 2. How would this judgment be formally expressed using standard preference notation?
Tags
Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Application in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Example of a Human Preference Ranking in RLHF
Ranked Preference Notation
Example of Listwise Ranking in RLHF
A human annotator provides the following judgments for four text completions (C1, C2, C3, C4) generated in response to a single prompt: C1 ≻ C4, C4 ≻ C2, and C2 ≻ C3. Based on these pairwise judgments, arrange the completions from most preferred to least preferred.
Limitations of Preference Notation
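The cards above exercise pairwise preference notation, where "a ≻ b" means a is preferred to b, and chains of such judgments can be combined by transitivity into a full ranking. A minimal sketch of that idea, assuming a simple Python representation of preferences as (winner, loser) pairs (the function name and data are illustrative, not from any particular library):

```python
# Pairwise human preferences: ("a", "b") encodes a ≻ b (a is preferred to b).
# Hypothetical data mirroring the annotator judgments in the exercise above.
preferences = [("C1", "C4"), ("C4", "C2"), ("C2", "C3")]

def total_order(prefs):
    """Chain pairwise preferences into a full ranking via transitivity.

    This is a simple topological sort; it assumes the judgments are
    consistent (no cycles) and connected enough to yield one ordering.
    """
    items = {x for pair in prefs for x in pair}
    # Count how many times each item is dominated (appears as the loser).
    losses = {x: 0 for x in items}
    for _, loser in prefs:
        losses[loser] += 1
    order = []
    remaining = dict(losses)
    while remaining:
        # An undominated item among those remaining is the most preferred.
        top = next(x for x, n in remaining.items() if n == 0)
        order.append(top)
        del remaining[top]
        # Removing the top item lifts its domination of direct losers.
        for winner, loser in prefs:
            if winner == top and loser in remaining:
                remaining[loser] -= 1
    return order

print(total_order(preferences))  # → ['C1', 'C4', 'C2', 'C3']
```

Note that this only works when the annotator's judgments are transitive and acyclic; real human preference data often violates both assumptions, which is one of the limitations the last card refers to.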