Short Answer

Effect of Temperature Scaling on a Reward-Modified Distribution

An AI text generation model adjusts its output probabilities using the formula Final_Prob = Ref_Prob * exp((1/β) * Reward), followed by renormalization so the adjusted probabilities sum to 1, where 'Ref_Prob' is the probability assigned by the base (reference) model, 'Reward' is a score for a specific quality (e.g., factual accuracy), and 'β' is a positive temperature parameter. A developer decreases the value of 'β' and observes that the model's outputs now adhere much more strictly to the rewarded quality but have become less diverse and creative. Explain the mathematical reason for this change in the model's behavior.

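The minimal numeric sketch below (not part of the original question; the candidate continuations, reward values, and β settings are invented for illustration) demonstrates the effect the question asks about: as β shrinks, 1/β grows, the factor exp(Reward/β) amplifies reward differences exponentially, and after renormalization the probability mass concentrates on the highest-reward outputs, so the entropy of the distribution (a rough proxy for diversity) falls.

```python
import numpy as np

def reward_modified_dist(ref_prob: np.ndarray, reward: np.ndarray, beta: float) -> np.ndarray:
    """Apply Final_Prob proportional to Ref_Prob * exp(Reward / beta), then renormalize."""
    unnormalized = ref_prob * np.exp(reward / beta)
    return unnormalized / unnormalized.sum()

def entropy(p: np.ndarray) -> float:
    """Shannon entropy in nats; lower entropy means a more concentrated, less diverse distribution."""
    return float(-(p * np.log(p + 1e-12)).sum())

# Fairly uniform base distribution over five hypothetical candidate continuations,
# with the first continuation scoring highest on the rewarded quality.
ref_prob = np.array([0.24, 0.22, 0.20, 0.18, 0.16])
reward   = np.array([1.0,  0.2,  0.1,  0.1,  0.0])

for beta in (5.0, 1.0, 0.2):
    p = reward_modified_dist(ref_prob, reward, beta)
    print(f"beta={beta:>4}: probs={np.round(p, 3)}, entropy={entropy(p):.3f}")

# As beta decreases, exp(Reward / beta) magnifies the gap between high- and
# low-reward continuations, mass collapses onto the highest-reward option,
# and the printed entropy drops: stricter adherence, less diversity.
```
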

Updated 2025-10-08


Tags

Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science