Multiple Choice

A data scientist trains two language models, Model Alpha and Model Beta, on the same large text corpus. After training, Model Alpha consistently achieves a lower cross-entropy loss than Model Beta on unseen test data. Given the principle that better prediction is equivalent to better compression, what is the most accurate interpretation of this result?
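To make the prediction-compression link concrete, here is a minimal Python sketch; the per-token probabilities are invented for illustration, not real model outputs. An ideal arithmetic coder spends about -log2 q(x) bits on a token the model assigned probability q(x), so the average code length per token equals the model's cross-entropy in bits, and a lower test cross-entropy means the held-out text compresses to fewer bits.

```python
import math

def bits_per_token(token_probs):
    """Average code length (in bits) an ideal arithmetic coder needs when
    each token costs -log2 of the probability the model assigned to it.
    This average is exactly the model's cross-entropy measured in bits."""
    return sum(-math.log2(p) for p in token_probs) / len(token_probs)

# Hypothetical probabilities each model assigned to the SAME held-out
# token sequence (illustrative numbers only).
alpha_probs = [0.30, 0.25, 0.40, 0.20]  # Model Alpha: sharper predictions
beta_probs = [0.15, 0.10, 0.25, 0.08]   # Model Beta: more uncertain

print(f"Alpha: {bits_per_token(alpha_probs):.2f} bits/token")  # ~1.85
print(f"Beta:  {bits_per_token(beta_probs):.2f} bits/token")   # ~2.93
# Alpha's lower cross-entropy means fewer bits per token, i.e. the
# same corpus would compress to a smaller size under Model Alpha.
```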

