Concept

Classification Error as a Bernoulli Random Variable

In the context of estimating classification error, the event of a classifier $f$ making an error on a single instance, $\mathbf{1}(f(X) \neq Y)$, can be modeled as a Bernoulli random variable. It takes the value $1$ (error) with probability equal to the true population error rate $\epsilon(f)$, and $0$ (correct) otherwise. Consequently, its variance is $\epsilon(f)(1-\epsilon(f))$, which reaches its maximum of $0.25$ when the true error rate is exactly $0.5$ and decreases as the error approaches $0$ or $1$. Since the empirical error averages $n$ such independent indicators, its variance is $\epsilon(f)(1-\epsilon(f))/n$, so its standard deviation cannot exceed $\sqrt{0.25/n}$ for a sample size $n$.
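The bound above can be checked empirically. The sketch below (an illustrative assumption, not code from D2L; the helper name `empirical_error_std` is made up) simulates a classifier whose predictions err independently with probability `eps`, and estimates the standard deviation of the empirical error rate by Monte Carlo. The measured spread should track $\sqrt{\epsilon(1-\epsilon)/n}$ and stay below the worst-case $\sqrt{0.25/n}$:

```python
import math
import random

def empirical_error_std(eps, n, trials=10_000, seed=0):
    """Monte Carlo estimate of the standard deviation of the empirical
    error rate, when each of n predictions errs i.i.d. with prob. eps."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(trials):
        # Count Bernoulli(eps) "errors" among n instances.
        errors = sum(1 for _ in range(n) if rng.random() < eps)
        estimates.append(errors / n)
    mean = sum(estimates) / trials
    var = sum((e - mean) ** 2 for e in estimates) / trials
    return math.sqrt(var)

n = 100
bound = math.sqrt(0.25 / n)  # worst-case std, attained at eps = 0.5
for eps in (0.1, 0.3, 0.5):
    sd = empirical_error_std(eps, n)
    theory = math.sqrt(eps * (1 - eps) / n)
    print(f"eps={eps}: simulated std {sd:.4f}, theory {theory:.4f}, bound {bound:.4f}")
```

For `n = 100` the bound is $0.05$; the simulated standard deviation at `eps = 0.1` is noticeably smaller, matching the claim that variance shrinks as the error rate moves away from $0.5$.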


Updated 2026-05-03


Tags: D2L

Dive into Deep Learning @ D2L