Multiple Choice

During the policy optimization stage of training a large language model, an engineer observes that the model's outputs are coherent and safe, but show very little improvement over the initial supervised fine-tuned (SFT) model and consistently receive mediocre scores from the reward model. Which of the following is the most likely cause of this issue, given that the policy optimization objective balances maximizing reward against a penalty for divergence from the reference policy?
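For reference, a common form of the objective the question refers to (a sketch; the symbols $\beta$, $\pi_{\text{ref}}$, and $r_{\phi}$ are generic notation rather than taken from a specific text) is

$$
\max_{\theta}\;
\mathbb{E}_{x \sim \mathcal{D},\; y \sim \pi_{\theta}(\cdot \mid x)}
\big[\, r_{\phi}(x, y) \,\big]
\;-\;
\beta\,
\mathbb{E}_{x \sim \mathcal{D}}
\big[\, \mathrm{KL}\!\left( \pi_{\theta}(\cdot \mid x) \,\Vert\, \pi_{\text{ref}}(\cdot \mid x) \right) \big],
$$

where $\pi_{\theta}$ is the policy being trained, $\pi_{\text{ref}}$ is the initial SFT policy, $r_{\phi}$ is the reward model, and $\beta$ weights the divergence penalty. In this formulation, the larger $\beta$ is, the more strongly $\pi_{\theta}$ is held near $\pi_{\text{ref}}$, regardless of how much additional reward could be gained by moving away from it.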




Tags

Ch.4 Alignment - Foundations of Large Language Models, Foundations of Large Language Models, Foundations of Large Language Models Course, Computing Sciences, Analysis in Bloom's Taxonomy, Cognitive Psychology, Psychology, Social Science, Empirical Science, Science