Learn Before
In the context of fine-tuning a language model with reinforcement learning, the optimization objective often includes a term that penalizes divergence from the initial reference policy. What is the most critical trade-off this penalty term is designed to manage?
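For reference, a common form of this objective is the KL-regularized reward (a sketch of the standard RLHF formulation; the symbols $r$, $\beta$, $\pi_\theta$, and $\pi_{\mathrm{ref}}$ are generic notation, not taken from this card):

\[
\mathcal{J}(\theta) = \mathbb{E}_{x \sim \mathcal{D},\; y \sim \pi_\theta(\cdot \mid x)}\big[\, r(x, y) \,\big] \;-\; \beta\, D_{\mathrm{KL}}\big( \pi_\theta(\cdot \mid x) \,\big\|\, \pi_{\mathrm{ref}}(\cdot \mid x) \big)
\]

The first term pushes the policy $\pi_\theta$ toward outputs the reward model scores highly; the $\beta$-weighted KL term anchors it to the reference policy $\pi_{\mathrm{ref}}$, trading reward maximization against preserving the fluency and general capabilities of the initial model (and limiting reward hacking).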
Tags
Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Parameter Update at the Reference Policy Point in PPO
PPO Objective Formula for LLM Training in RLHF
Diagnosing Issues in LLM Reinforcement Learning
In the context of fine-tuning a language model with reinforcement learning, the optimization objective is composed of several key elements. Match each element with its primary function in the training process.