True/False

The primary reason Direct Preference Optimization (DPO) is considered more sample-efficient than Proximal Policy Optimization (PPO) is that DPO requires actively collecting new preference data from an online environment throughout its training process.

False

True
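
For reference, the claim inverts the actual reason: DPO is an offline method that optimizes a closed-form loss over a fixed preference dataset, whereas PPO must repeatedly sample fresh responses from the current policy during training. The DPO objective from Rafailov et al. (2023) makes this explicit, where \mathcal{D} is the static preference dataset, y_w and y_l are the preferred and dispreferred responses, \pi_{\mathrm{ref}} is a frozen reference policy, and \beta is a temperature hyperparameter:

\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}} \left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \right) \right]

The expectation runs over the fixed dataset \mathcal{D} only; no environment interaction or fresh data collection appears anywhere in the objective, so the statement above is False.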

Updated 2025-10-10

Tags

Ch.4 Alignment - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Analysis in Bloom's Taxonomy

Cognitive Psychology

Psychology

Social Science

Empirical Science

Science