Theory

Central Limit Theorem

The Central Limit Theorem is a foundational result in probability theory stating that the average of $n$ independent random samples drawn from any distribution with mean $\mu$ and finite standard deviation $\sigma$ approximately follows a normal distribution centered at the true mean $\mu$ with standard deviation $\sigma/\sqrt{n}$, as the sample size $n$ grows large. In machine learning, this theorem explains why the empirical error of a classifier evaluated on a test set of $n$ examples converges to its true population error at an asymptotic rate of $\mathcal{O}(1/\sqrt{n})$.
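A minimal simulation sketch of the statement above, using NumPy (the distribution, sample size, and trial count are illustrative choices, not from the original): averages of $n$ draws from a decidedly non-normal exponential distribution cluster around $\mu$ with spread close to $\sigma/\sqrt{n}$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Exponential(1) distribution: mean mu = 1, std sigma = 1, and strongly skewed.
mu, sigma = 1.0, 1.0
n = 1000        # draws averaged per trial
trials = 20000  # number of independent averages

# Each row is one sample of size n; averaging each row gives one sample mean.
sample_means = rng.exponential(scale=1.0, size=(trials, n)).mean(axis=1)

# The sample means center near mu with spread near sigma / sqrt(n),
# even though the underlying distribution is far from normal.
print("mean of averages:", sample_means.mean())   # close to mu
print("std of averages: ", sample_means.std())    # close to sigma / sqrt(n)
print("predicted std:   ", sigma / np.sqrt(n))
```

Increasing `n` shrinks the spread of the averages by the predicted $1/\sqrt{n}$ factor, which is the same mechanism behind the $\mathcal{O}(1/\sqrt{n})$ convergence of test-set error estimates.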


Updated 2026-05-03

Tags

Data Science

D2L

Dive into Deep Learning @ D2L