Learn Before
A team is fine-tuning a language model using a reinforcement learning process guided by human feedback. They observe that while the model's policy is successfully optimized to achieve high scores from its internal reward signal, the generated text is often repetitive, nonsensical, and misaligned with the original human preferences. Which of the following is the most likely cause of this discrepancy?
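The failure described is reward hacking: the policy over-optimizes the learned reward model and drifts away from the distribution the reward model was trained on, producing degenerate text. The standard mitigation is a KL penalty that keeps the policy close to the reference (pretrained) model. The sketch below is a minimal, hypothetical illustration of that KL-shaped reward; the function name, values, and `beta` coefficient are assumptions for demonstration, not the team's actual setup.

```python
def penalized_reward(reward, logprob_policy, logprob_ref, beta=0.5):
    """KL-shaped reward commonly used in RLHF fine-tuning (e.g., with PPO).

    Subtracts a penalty proportional to how far the policy's
    log-probability drifts from the reference model's, discouraging
    the policy from exploiting flaws in the reward model.
    (Illustrative sketch; values and beta are hypothetical.)
    """
    kl_estimate = logprob_policy - logprob_ref  # per-token KL estimate
    return reward - beta * kl_estimate

# A policy that stays near the reference keeps most of its reward:
aligned = penalized_reward(reward=1.0, logprob_policy=-2.0, logprob_ref=-2.1)

# A policy that "hacks" the reward model drifts far from the reference,
# so the KL term grows and wipes out the raw reward gain:
hacked = penalized_reward(reward=1.5, logprob_policy=-0.5, logprob_ref=-4.0)
```

Without the penalty (`beta=0`) the hacked policy scores higher; with it, the drifting policy is penalized, which is why omitting or under-weighting this term is a likely cause of the discrepancy in the question.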
Tags
Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
A language model is being fine-tuned using a reinforcement learning approach that incorporates human feedback. Arrange the following key stages of this process into the correct chronological order.
Applying a Preference Model for AI Fine-Tuning