Learn Before
Concept

Cross-entropy loss

The cross-entropy loss function works very well for models that predict binary classes (i.e., the output is a probability between 0 and 1). It is defined as L = -[y*log(ŷ) + (1-y)*log(1-ŷ)], where y is the true label and ŷ is the predicted probability. If y=0, the left term vanishes and the loss reduces to -log(1-ŷ); if y=1, the right term vanishes and the loss reduces to -log(ŷ). In both cases the loss is small when the predicted probability is close to the true label and grows without bound as the prediction moves toward the wrong extreme, so minimizing it pushes predictions toward the true probabilities.
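The definition above can be sketched as a small NumPy function (a minimal illustration, not a production implementation; the epsilon clipping is a common numerical safeguard added here to avoid log(0)):

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    # Clip predictions away from 0 and 1 so log() never sees exactly 0
    y_pred = np.clip(y_pred, eps, 1 - eps)
    # -[y*log(y_hat) + (1-y)*log(1-y_hat)], averaged over examples
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([1.0, 0.0, 1.0])
y_pred = np.array([0.9, 0.1, 0.8])
print(binary_cross_entropy(y_true, y_pred))  # small loss: predictions agree with labels
```

Note how a confident correct prediction (e.g., ŷ=0.99 when y=1) gives a loss near zero, while a confident wrong prediction (ŷ=0.01 when y=1) gives a very large loss.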


Updated 2026-04-15

Tags

Data Science

Foundations of Large Language Models Course

Computing Sciences
