Concept

Asymptotic Convergence Rate of Test Error

By the central limit theorem, as the size $n$ of a test dataset grows toward infinity, the empirical test error $\epsilon_\mathcal{D}(f)$ approaches the true population error $\epsilon(f)$ at an asymptotic convergence rate of $\mathcal{O}(1/\sqrt{n})$. This $\mathcal{O}(1/\sqrt{n})$ rate reveals that improving the precision of the error estimate is expensive in terms of data: estimating the test error twice as precisely requires collecting four times as many samples.
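A small simulation can illustrate this rate. The sketch below (an assumption for illustration, not from the source: a classifier that misclassifies each test example independently with probability $p = 0.2$) measures the spread of the empirical test error over many simulated test sets; quadrupling $n$ should roughly halve that spread.

```python
import math
import random

random.seed(0)

def empirical_error_std(n, p=0.2, trials=2000):
    """Standard deviation of the empirical test error over `trials`
    simulated test sets of size n, where each example is misclassified
    independently with probability p (a hypothetical classifier)."""
    estimates = []
    for _ in range(trials):
        errors = sum(1 for _ in range(n) if random.random() < p)
        estimates.append(errors / n)
    mean = sum(estimates) / trials
    return math.sqrt(sum((e - mean) ** 2 for e in estimates) / trials)

# Quadrupling the test-set size roughly halves the spread of the
# estimate, consistent with the O(1/sqrt(n)) convergence rate.
s1 = empirical_error_std(250)
s2 = empirical_error_std(1000)
print(s1, s2, s1 / s2)  # ratio is close to 2
```

The theoretical standard deviation here is $\sqrt{p(1-p)/n}$, so going from $n = 250$ to $n = 1000$ shrinks it by a factor of exactly $\sqrt{4} = 2$; the simulated ratio fluctuates around that value.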


Updated 2026-05-03


Tags

D2L

Dive into Deep Learning @ D2L