Case Study

Diagnosing Undesirable Model Behavior

A text-generation model is being trained using a feedback signal derived from two independent scoring systems: one measuring 'informativeness' (how detailed and factual the text is) and another measuring 'safety' (how free the text is from biased or inappropriate content). The final feedback score used to update the model is a simple, unweighted average of the scores from these two systems.
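To make the aggregation concrete, here is a minimal sketch of the described feedback computation; the function name and the specific score values are illustrative, assuming both scoring systems return values in [0, 1].

```python
# A minimal sketch of the feedback signal described above, assuming both
# scorers return values in [0, 1]; all numbers here are illustrative.

def feedback_score(informativeness: float, safety: float) -> float:
    """Unweighted average of the two scoring systems."""
    return (informativeness + safety) / 2.0

# An output that is highly informative but clearly unsafe...
unsafe_but_detailed = feedback_score(informativeness=0.95, safety=0.10)

# ...receives the same combined reward as a moderately informative,
# moderately safe output, so the average alone cannot tell them apart.
balanced = feedback_score(informativeness=0.55, safety=0.50)

print(unsafe_but_detailed)  # 0.525
print(balanced)             # 0.525
```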

After training, evaluators observe that the model consistently produces highly informative text, but it also frequently generates unsafe content.

Analyze this situation. Why would simple averaging of the two scores lead to this specific undesirable outcome, even when the safety scorer correctly identifies unsafe content?

Updated 2025-09-26

Tags: Ch.4 Alignment - Foundations of Large Language Models, Foundations of Large Language Models, Foundations of Large Language Models Course, Computing Sciences, Analysis in Bloom's Taxonomy, Cognitive Psychology, Psychology, Social Science, Empirical Science, Science