Concept

Falsifiability of Machine Learning Models

In machine learning, model complexity can be analyzed through Karl Popper's scientific criterion of falsifiability. An overly complex model class that can perfectly fit any arbitrary dataset—including randomly assigned labels—acts like an unfalsifiable theory that explains every possible observation, failing to guarantee that a true pattern has been discovered. Conversely, if a model class is restrictive enough that it could not possibly fit arbitrary labels, yet it successfully fits the actual training data, we can more confidently conclude it has learned a generalizable pattern.
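This idea can be sketched as a randomization test: fit the same restricted model class once to data with a genuine pattern and once to randomly assigned labels. All names and parameters below are illustrative; the example uses a hand-rolled logistic regression on synthetic data, not any particular library's API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary classification data with a genuine linear pattern:
# the label is the sign of a fixed linear function of the features.
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y_real = (X @ w_true > 0).astype(float)

# "Falsification attempt": labels assigned at random carry no pattern.
y_random = rng.integers(0, 2, size=n).astype(float)

def fit_linear_classifier(X, y, steps=2000, lr=0.1):
    """Logistic regression by plain gradient descent (a restricted model class)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)     # gradient of the log loss
    return w

def train_accuracy(X, y):
    w = fit_linear_classifier(X, y)
    return np.mean(((X @ w) > 0) == (y > 0.5))

acc_real = train_accuracy(X, y_real)      # high: the model fits the true pattern
acc_random = train_accuracy(X, y_random)  # near chance: the class cannot memorize noise
```

Because the linear class is restrictive enough to fail on arbitrary labels (`acc_random` stays near 0.5), its success on the real labels (`acc_real` near 1.0) is evidence of a discovered pattern rather than memorization; an unrestricted model that fit both equally well would tell us nothing.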


Updated 2026-05-03


Tags: D2L (Dive into Deep Learning @ D2L)