Learn Before
A startup is aligning a new AI financial advisor using preference feedback. The data is collected exclusively from a small, culturally uniform group of the company's own financial experts. Based on the known challenges of this alignment method, what is the most critical potential flaw in this approach?
Tags
Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
AI Feedback as an Alternative to Human Feedback
Evaluating an AI Alignment Strategy
Critique of Human Feedback for Model Alignment