Essay

Evaluating Reference Model Selection in Reward-Based Training

In a reward-based training process for a language model, a fixed 'reference model' is used to regularize the policy updates, typically via a KL-divergence penalty that keeps the main model from deviating too drastically from a known, stable distribution. Evaluate the trade-offs involved in choosing this reference model. Specifically, compare the potential outcomes of using the initial, pre-trained base model as the reference versus using a model that has already undergone some initial instruction-based fine-tuning.
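
To make the regularization concrete, below is a minimal sketch of the KL-penalized reward as it typically appears in PPO-style RLHF pipelines. Everything in it is illustrative: the function name kl_penalized_reward, the penalty weight beta, and the toy log-probabilities are assumptions for this sketch, not details from the prompt.

```python
def kl_penalized_reward(reward, policy_logprobs, ref_logprobs, beta=0.1):
    """KL-penalized reward for one sampled response (illustrative sketch).

    reward          -- scalar score from the reward model
    policy_logprobs -- per-token log-probs of the response under the policy
    ref_logprobs    -- per-token log-probs of the same tokens under the
                       frozen reference model (base or SFT checkpoint)
    beta            -- penalty weight (hypothetical value)
    """
    # Monte Carlo estimate of KL(policy || reference) on the sampled tokens:
    # sum of log-prob differences over the tokens the policy actually produced.
    kl_estimate = sum(p - r for p, r in zip(policy_logprobs, ref_logprobs))
    # The penalty shrinks the effective reward whenever the policy drifts
    # away from the reference distribution, anchoring the update.
    return reward - beta * kl_estimate

# Toy numbers: the policy assigns higher probability than the reference
# to the tokens it sampled, so the KL term eats into the raw reward.
policy_lp = [-1.2, -0.8, -2.0]
ref_lp = [-1.5, -1.5, -2.2]
print(kl_penalized_reward(3.0, policy_lp, ref_lp))  # 3.0 - 0.1 * 1.2 = 2.88
```

Note that the code is agnostic to which checkpoint produced ref_logprobs; the trade-off this essay asks about lives entirely in what distribution those log-probabilities encode (the raw pre-trained base model or an instruction-tuned one).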

