Concept

Indexing Predicted Probabilities for Cross-Entropy Loss

When calculating the cross-entropy loss for a batch of examples, it is computationally inefficient to use a for-loop to iterate over each example to evaluate the negative log-likelihood. Instead, advanced array indexing can be used to extract the model's predicted probability assigned to the true label for each example. Because the true labels $\mathbf{y}$ are typically provided as a vector of integer class indices, these indices can directly select the corresponding predicted probabilities from the prediction matrix $\hat{\mathbf{y}}$. This efficiently bypasses the need for explicitly multiplying a one-hot encoded label matrix by the predictions.
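A minimal NumPy sketch of this trick, using made-up probabilities for a batch of three examples over four classes: pairing a row-index array with the label vector selects $\hat{y}_{i, y_i}$ for every example in one vectorized step, with no loop and no one-hot matrix.

```python
import numpy as np

# Hypothetical predicted probabilities for 3 examples over 4 classes;
# each row is assumed to come from a softmax and sum to 1.
y_hat = np.array([
    [0.1, 0.6, 0.2, 0.1],
    [0.7, 0.1, 0.1, 0.1],
    [0.2, 0.2, 0.5, 0.1],
])

# True labels given as integer class indices, one per example.
y = np.array([1, 0, 2])

# Advanced indexing: picks y_hat[i, y[i]] for every row i at once.
picked = y_hat[np.arange(len(y)), y]  # -> array([0.6, 0.7, 0.5])

# Per-example cross-entropy loss: negative log of the picked probability.
loss = -np.log(picked)
print(loss)
```

The same idiom works in PyTorch or other frameworks with analogous fancy-indexing (or `gather`) semantics; the key point is that the integer label vector itself serves as the column index.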

Updated 2026-05-03

Tags

D2L

Dive into Deep Learning @ D2L