Multiple Choice

When comparing Direct Preference Optimization (DPO) with Proximal Policy Optimization (PPO), what is the primary consequence of DPO's foundational assumption that the reward and reference models are fixed throughout training?
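For context, here is a minimal sketch of the DPO objective (assuming PyTorch; the function name `dpo_loss` and the `beta` argument are illustrative, not from the source). It highlights the assumption the question refers to: the reference model is frozen, so its log-probabilities enter the loss as constants while only the policy is updated.

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Sketch of the DPO loss for a batch of preference pairs.

    The reference log-probs are computed once from a frozen copy of the
    initial policy (e.g. under torch.no_grad()) and never updated --
    this is the fixed-reference assumption in the question.
    """
    # Implicit per-example rewards, defined relative to the frozen reference.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Bradley-Terry preference likelihood: maximize sigmoid(reward margin).
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```

Note the contrast with PPO-based RLHF, where the reward signal comes from a separately trained reward model and the policy is updated online against it; DPO folds the reward into this closed-form objective over static preference data.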

Updated 2025-10-05

Tags

Ch.4 Alignment - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Analysis in Bloom's Taxonomy

Cognitive Psychology

Psychology

Social Science

Empirical Science

Science