Learn Before
Multiple Choice

A neural network is being trained for a 3-class classification task (Classes A, B, C). For a single training example, the true label is 'Class B'. The model outputs the probability distribution P(A)=0.2, P(B)=0.5, P(C)=0.3. The loss for this example is calculated using the negative log-likelihood of the correct class, resulting in a loss of -log(0.5). This calculation is a direct application of the cross-entropy formula between the model's predicted distribution and the empirical distribution from the training data. What is the specific empirical probability distribution for this single training example?

P(A) = 0, P(B) = 1, P(C) = 0 — a one-hot distribution that places all probability mass on the true label, Class B.
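A minimal sketch of the calculation described in the question: when the empirical distribution is one-hot, every term of the cross-entropy sum vanishes except the one for the true class, so the loss reduces to the negative log-likelihood -log(0.5). The variable names here are illustrative, not from the question.

```python
import math

# Model's predicted distribution over (A, B, C), as given in the question.
predicted = [0.2, 0.5, 0.3]

# Empirical (one-hot) distribution for this example: all mass on Class B.
empirical = [0.0, 1.0, 0.0]

# Cross-entropy H(empirical, predicted) = -sum_i empirical[i] * log(predicted[i]).
# Terms where empirical[i] == 0 contribute nothing, so the sum collapses to
# the negative log-likelihood of the correct class.
loss = -sum(p * math.log(q) for p, q in zip(empirical, predicted) if p > 0)

print(loss)                                 # ≈ 0.6931
print(math.isclose(loss, -math.log(0.5)))   # True
```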

Updated 2025-10-06


Tags

Data Science

Foundations of Large Language Models Course

Computing Sciences

Analysis in Bloom's Taxonomy

Cognitive Psychology

Psychology

Social Science

Empirical Science

Science