Multiple Choice

An auto-regressive language model is being trained on a large text corpus. At one training step, the model processes the input 'The cat sat on the' and must predict the next token. The actual next token in the training data is 'mat'. Which of the following predicted probability distributions for the next token would result in the lowest cross-entropy loss?
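The principle the question tests can be sketched in a few lines: for a single target token, cross-entropy loss reduces to the negative log-probability the model assigned to that token, so the distribution that puts the most mass on 'mat' gives the lowest loss. Below is a minimal illustration; the two candidate distributions are hypothetical, not taken from the question's answer choices.

```python
import math

def cross_entropy(dist, target):
    # Cross-entropy for a single prediction: -log P(target)
    return -math.log(dist[target])

# Hypothetical next-token distributions over a tiny vocabulary
confident = {"mat": 0.9, "rug": 0.05, "sofa": 0.05}
uncertain = {"mat": 0.4, "rug": 0.3, "sofa": 0.3}

loss_confident = cross_entropy(confident, "mat")  # -ln(0.9) ≈ 0.105
loss_uncertain = cross_entropy(uncertain, "mat")  # -ln(0.4) ≈ 0.916
print(loss_confident, loss_uncertain)
```

Because -log is monotonically decreasing on (0, 1], the loss is minimized by whichever option assigns the highest probability to the true next token 'mat'.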

Updated 2025-10-06

Tags

Ch.1 Pre-training - Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Application in Bloom's Taxonomy

Cognitive Psychology

Psychology

Social Science

Empirical Science

Science