Learn Before
Selecting a Baseline for Policy Optimization
Which of the two models—the original base pre-trained model or the fine-tuned dialogue model—should the team use as the fixed reference policy during the preference alignment phase? Justify your choice by explaining the role this reference policy plays in ensuring training stability and maintaining response quality.
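The role the reference policy plays can be made concrete in code. Below is a minimal sketch of the DPO loss for a single preference pair, assuming scalar per-response log-probabilities; the name `dpo_loss` and its signature are illustrative, not taken from any library.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one (chosen, rejected) pair of responses.

    The frozen reference policy anchors the log-ratios: the further the
    trained policy drifts from the reference, the more the implicit KL
    penalty (scaled by beta) pushes back, which is what keeps training
    stable and preserves response quality.
    """
    pi_logratio = policy_chosen_logp - policy_rejected_logp
    ref_logratio = ref_chosen_logp - ref_rejected_logp
    logit = beta * (pi_logratio - ref_logratio)
    # -log(sigmoid(logit)) computed stably as log(1 + e^{-logit})
    return math.log1p(math.exp(-logit))
```

When the policy matches the reference exactly, the logit is zero and the loss is log 2; the loss drops only when the policy prefers the chosen response more strongly than the reference does, relative to the rejected one.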
Tags
Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Application in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
An AI development team is refining a pre-trained language model using a dataset of human preferences, where each example consists of a prompt, a preferred response, and a rejected response. As training progresses, they notice that while the model is learning to generate responses that align with the preferences, its general language quality is deteriorating: it produces more repetitive and nonsensical text. What aspect of the optimization objective's design is the most probable cause of this issue?
Choosing a Baseline for Preference Alignment
Selecting a Baseline for Policy Optimization
Conceptual Objective Function Assumed in DPO
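The degradation described in the related question above (preference gains alongside repetitive, nonsensical text) typically traces to a missing regularization term that ties the policy to a fixed reference. A minimal sketch of such a regularized per-sample reward follows; the function name and parameters are illustrative assumptions, not from any specific library.

```python
def regularized_reward(task_reward, policy_logp, ref_logp, beta=0.05):
    """Reward with a KL-style penalty toward a frozen reference policy.

    The penalty term beta * (log pi(y|x) - log pi_ref(y|x)) discourages
    the policy from drifting far from the reference. With beta = 0 the
    penalty vanishes, and the policy is free to exploit the preference
    signal at the cost of general fluency (reward hacking).
    """
    return task_reward - beta * (policy_logp - ref_logp)
```

For example, a response the policy now assigns much higher probability than the reference does gets its reward reduced, while setting `beta=0` returns the raw task reward unchanged.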