Example

Sample Complexity for Error Estimation

To determine the dataset size required to estimate the population error $\epsilon(f)$ within a $95\%$ confidence interval of $\pm 0.01$, we can compare asymptotic analysis with finite-sample bounds. Asymptotic analysis suggests that roughly 10,000 samples are needed to achieve this confidence level. In contrast, applying Hoeffding's inequality yields a more conservative but valid finite-sample guarantee, requiring approximately 15,000 examples. This sample complexity aligns with the test set sizes of many popular machine learning benchmarks.
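The two numbers above can be reproduced with a short calculation. The sketch below assumes the standard setup: each test prediction is a Bernoulli draw with worst-case variance $1/4$, the asymptotic bound uses the normal approximation with $z \approx 1.96$, and the Hoeffding bound uses the one-sided form $\exp(-2nt^2) \le \delta$; these modeling choices are assumptions, not stated in the text.

```python
import math

# Target: estimate the error within t = 0.01 at 95% confidence (delta = 0.05).
t = 0.01
delta = 0.05

# Asymptotic (CLT) estimate: a Bernoulli error indicator has variance at most
# 1/4, so the half-width of the normal confidence interval is z * sqrt(1/(4n)).
# Solving z * sqrt(1/(4n)) <= t for n:
z = 1.96  # 97.5th percentile of the standard normal
n_clt = math.ceil((z / (2 * t)) ** 2)

# Hoeffding's inequality (one-sided form, assumed here): the deviation
# probability is at most exp(-2 * n * t^2), so n >= ln(1/delta) / (2 * t^2).
n_hoeffding = math.ceil(math.log(1 / delta) / (2 * t ** 2))

print(n_clt)        # about 10,000 samples
print(n_hoeffding)  # about 15,000 samples
```

Running this gives 9,604 for the asymptotic estimate and 14,979 for the Hoeffding bound, matching the rounded figures of 10,000 and 15,000 quoted in the text.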

Updated 2026-05-03
