Learn Before
A development team is training a large language model to be a helpful assistant. Their process involves two stages:
- They train a 'scoring model' on a dataset of human-ranked conversations. The goal of this scoring model is to predict which of two responses a human would prefer, assigning a numerical score.
- They then use this scoring model to automatically provide feedback to the main language model, rewarding it for generating responses that receive a high score.
After extensive training using this method, the team observes that the main language model produces responses that are consistently very long and use excessively polite and elaborate phrasing, even when a short, direct answer would be more helpful. These long, polite responses always receive very high scores from the scoring model.
Which of the following statements best evaluates the fundamental issue with this training setup?
0
1
Tags
Ch.2 Generative Models - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Ch.4 Alignment - Foundations of Large Language Models
Evaluation in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
A development team is training a large language model to be a helpful assistant. Their process involves two stages:
- They train a 'scoring model' on a dataset of human-ranked conversations. The goal of this scoring model is to predict which of two responses a human would prefer, assigning a numerical score.
- They then use this scoring model to automatically provide feedback to the main language model, rewarding it for generating responses that receive a high score.
After extensive training using this method, the team observes that the main language model produces responses that are consistently very long and use excessively polite and elaborate phrasing, even when a short, direct answer would be more helpful. These long, polite responses always receive very high scores from the scoring model.
Which of the following statements best evaluates the fundamental issue with this training setup?
The Role of the Reward Model in Scalable Training
In the process of fine-tuning a language model using feedback, the problem is often framed using concepts from a general learning paradigm. Match each component from this general paradigm to its specific implementation in the language model fine-tuning process.