Multiple Choice

A development team is aligning a new large language model. Their sole strategy is to use a reward model that gives high scores for outputs that are factually accurate and verifiable. Why is this singular focus likely to result in an inadequately aligned model?
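
To make the failure mode concrete before answering, here is a minimal sketch (all names and examples hypothetical, not from any real reward-model API) of how a factuality-only reward signal under-specifies alignment: a true-but-useless response and a true-and-helpful response earn identical reward, so the training objective has no way to prefer the helpful one.

def factuality_reward(output: str, is_verifiably_true: bool) -> float:
    """Toy reward model: scores only factual accuracy, nothing else."""
    return 1.0 if is_verifiably_true else 0.0

# Two candidate responses to "How do I treat a minor burn?"
candidates = [
    ("Run cool water over the burn for 10-20 minutes.", True),  # true AND helpful
    ("Water is composed of hydrogen and oxygen.", True),        # true but useless
]

for text, is_true in candidates:
    print(f"reward={factuality_reward(text, is_true):.1f}  |  {text}")

# Both candidates receive the same maximal reward (1.0), so dimensions
# such as helpfulness and harmlessness are invisible to the optimizer:
# a policy can satisfy this reward while remaining evasive, irrelevant,
# or unsafe.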

Updated 2025-10-02

Tags

Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science