Concept
Adaptive Overfitting
In machine learning, the statistical guarantees on test set performance rest on the assumption that a classifier is chosen without any prior contact with the test dataset. If a subsequent model is designed after the modeler has observed the test set performance of a prior model, information from the test dataset inevitably leaks to the modeler. Because of this leakage, the test dataset can no longer be viewed as a random draw from the underlying population relative to the modeler's choices. This human-in-the-loop bias is known as adaptive overfitting, and it compromises the validity of subsequent test set evaluations.
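The effect can be made concrete with a minimal simulation (an illustrative sketch, not from the source text): every candidate "model" below is pure random guessing with true accuracy 0.5, yet by repeatedly querying the same test set and keeping the best result, the modeler ends up reporting an accuracy well above 0.5.

```python
import random

random.seed(0)

n_test = 100
# A fixed "test set" of binary labels with no learnable signal,
# so any classifier's true accuracy is exactly 50%.
test_labels = [random.randint(0, 1) for _ in range(n_test)]

def random_classifier_predictions():
    # Each candidate "model" just guesses at random (true accuracy 0.5).
    return [random.randint(0, 1) for _ in range(n_test)]

def accuracy(preds, labels):
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Adaptive protocol: evaluate many models on the SAME test set and
# keep the best score. The best observed accuracy drifts upward even
# though every model's true accuracy is 0.5 -- this gap is the bias
# that adaptive overfitting introduces.
best = 0.0
for _ in range(1000):
    best = max(best, accuracy(random_classifier_predictions(), test_labels))

print(f"Best observed test accuracy after 1000 adaptive queries: {best:.2f}")
```

The selected score overstates the true accuracy because the maximum over many evaluations on one fixed test set captures that set's random fluctuations, exactly the leakage described above; a fresh test set would bring the estimate back to chance level.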
Updated 2026-05-03
Tags
D2L
Dive into Deep Learning @ D2L