Example of Indexing Predicted Probabilities

Consider a batch of 2 examples where a model outputs predicted probabilities over 3 classes, represented by the matrix

$$\hat{\mathbf{y}} = \begin{bmatrix} 0.1 & 0.3 & 0.6 \\ 0.3 & 0.2 & 0.5 \end{bmatrix}.$$

If the true class labels for these examples are 0 and 2 respectively, the label vector is $\mathbf{y} = [0, 2]$. Using array indexing, we can efficiently extract the predicted probability corresponding to each true label without writing a for-loop. For the first example (true class 0), we select the first element of the first row ($0.1$); for the second example (true class 2), we select the third element of the second row ($0.5$). The resulting array of selected probabilities is $[0.1, 0.5]$.
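The selection above can be sketched with NumPy integer-array indexing (variable names here are illustrative, not from the original text):

```python
import numpy as np

# Predicted probabilities for a batch of 2 examples over 3 classes
y_hat = np.array([[0.1, 0.3, 0.6],
                  [0.3, 0.2, 0.5]])

# True class labels for each example
y = np.array([0, 2])

# Integer-array indexing: for each row i, pick y_hat[i, y[i]] in one step
picked = y_hat[np.arange(len(y_hat)), y]
print(picked)  # [0.1 0.5]
```

Passing two integer arrays of equal length pairs them elementwise, so `y_hat[np.arange(len(y_hat)), y]` gathers one entry per row. This is the same trick commonly used to compute cross-entropy loss over a batch without an explicit loop.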

Updated 2026-05-03


Dive into Deep Learning @ D2L