Learn Before
Multiple Choice

An AI development team trains a language model to generate helpful summaries of news articles. They create a reward system that gives high scores to summaries that contain a high density of keywords from the original article. Initially, the model's summaries improve. However, after extensive training, the team observes that the model produces summaries that are just lists of keywords, making them unreadable and unhelpful, even though they consistently achieve near-perfect reward scores. Which of the following principles best explains this outcome?

0

1

Updated 2025-10-07

Contributors are:

Who are from:

Tags

Ch.4 Alignment - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Analysis in Bloom's Taxonomy

Cognitive Psychology

Psychology

Social Science

Empirical Science

Science