Case Study

Choosing an Alignment Strategy

A research lab has a fixed, high-quality dataset of 50,000 prompts, each paired with a human-preferred response and a human-rejected response. Because the lab operates under a tight computational budget, its primary goal is to align its language model to these preferences as efficiently as possible. The lab is debating between two methods:

  • Method A: First, train a separate reward model on the preference dataset. Then, use this reward model in an online reinforcement learning loop to fine-tune the language model policy, generating new samples at each step.
  • Method B: Use the preference dataset directly to fine-tune the language model policy in a single stage, with a loss function that increases the probability of the preferred responses while decreasing the probability of the rejected ones (minimal sketches of both methods' core losses follow the question below).

Based on the lab's primary goal and available resources, which method should they choose? Justify your decision by evaluating the trade-offs of both methods in the context of this scenario.
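
To make the comparison concrete, here is a minimal sketch of the core training objective behind each method, assuming a PyTorch setting. Method A's first stage fits a reward model with a pairwise Bradley-Terry loss (the subsequent online RL loop, e.g. PPO, is not shown); Method B's single-stage loss matches the form of Direct Preference Optimization (DPO). All function and variable names are illustrative, not taken from the scenario.

```python
import torch
import torch.nn.functional as F

def reward_model_loss(reward_chosen: torch.Tensor,
                      reward_rejected: torch.Tensor) -> torch.Tensor:
    """Method A, stage 1: pairwise (Bradley-Terry) reward-model loss.

    Inputs are the scalar rewards the reward model assigns to the
    preferred and rejected responses for each prompt (shape: [batch]).
    The trained reward model is then used in a separate online RL loop,
    which must generate and score fresh samples at every step.
    """
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

def direct_preference_loss(policy_chosen_logps: torch.Tensor,
                           policy_rejected_logps: torch.Tensor,
                           ref_chosen_logps: torch.Tensor,
                           ref_rejected_logps: torch.Tensor,
                           beta: float = 0.1) -> torch.Tensor:
    """Method B: single-stage preference loss in the style of DPO.

    Each input is the summed log-probability of a full response under
    the trainable policy or a frozen reference model (shape: [batch]).
    The implicit reward is the scaled log-ratio of policy to reference
    probabilities, so no reward model or sampling loop is needed.
    """
    chosen_margin = policy_chosen_logps - ref_chosen_logps
    rejected_margin = policy_rejected_logps - ref_rejected_logps
    # Push the policy to rank the chosen response above the rejected one.
    return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()
```

Note the structural difference: Method B's loss consumes only the fixed preference pairs and forward passes through two models, while Method A additionally requires sample generation and reward scoring inside its RL loop. That difference is the crux of the computational trade-off the question asks you to evaluate.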
