Finite Sample Bounds for Test Error via Hoeffding's Inequality

While asymptotic convergence provides intuition for how test error behaves as the dataset size approaches infinity, practical machine learning relies on finite datasets. Because the classification error is a bounded random variable (it lies between 0 and 1), a valid finite-sample bound can be established using Hoeffding's inequality. It guarantees that the probability of the empirical error $\epsilon_\mathcal{D}(f)$ exceeding the true error $\epsilon(f)$ by a margin of $t$ or more decays exponentially in the number of test samples $n$:

$$P\left(\epsilon_\mathcal{D}(f) - \epsilon(f) \geq t\right) \leq \exp\left(-2nt^2\right)$$
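As a quick sanity check, the bound can be compared against a Monte Carlo estimate of the deviation probability. The sketch below (all numbers are illustrative choices, not from the text) models each test prediction's loss as a Bernoulli draw with success probability equal to the true error, so the empirical error is the average of $n$ such draws:

```python
import math
import random

def hoeffding_bound(n, t):
    # One-sided Hoeffding bound for the mean of n i.i.d. variables in [0, 1]:
    # P(empirical_error - true_error >= t) <= exp(-2 * n * t^2)
    return math.exp(-2 * n * t * t)

def simulate_deviation_prob(true_error, n, t, trials=5000, seed=0):
    # Monte Carlo estimate of P(empirical_error - true_error >= t):
    # each trial draws n Bernoulli(true_error) losses and averages them.
    rng = random.Random(seed)
    exceed = 0
    for _ in range(trials):
        mistakes = sum(1 for _ in range(n) if rng.random() < true_error)
        if mistakes / n - true_error >= t:
            exceed += 1
    return exceed / trials

# Illustrative values: true error 10%, test set of 500 samples, margin 0.03.
n, t, eps = 500, 0.03, 0.10
empirical = simulate_deviation_prob(eps, n, t)
bound = hoeffding_bound(n, t)
print(f"empirical P = {empirical:.4f}, Hoeffding bound = {bound:.4f}")
```

The Hoeffding bound is distribution-free, so it is typically loose: the simulated deviation probability falls well below $\exp(-2 \cdot 500 \cdot 0.03^2) \approx 0.41$, but the bound holds regardless of the true error rate.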

Updated 2026-05-03

Tags: D2L

Dive into Deep Learning @ D2L