Multiple Choice

A machine learning engineer is training a language model on a text corpus. During training, they plot two values at each step:

  1. The average negative log-likelihood of the target sequences.
  2. The cross-entropy loss between the model's predicted probability distributions and the one-hot encoded target tokens.

The engineer observes that the two plots are identical. Which of the following statements provides the most accurate mathematical justification for this observation?

Updated 2025-09-28

Tags

Ch.4 Alignment - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Analysis in Bloom's Taxonomy

Cognitive Psychology

Psychology

Social Science

Empirical Science

Science