Falsifiability of Machine Learning Models
In machine learning, model complexity can be analyzed through Karl Popper's scientific criterion of falsifiability. An overly complex model class that can perfectly fit any arbitrary dataset—including randomly assigned labels—acts like an unfalsifiable theory that explains every possible observation, failing to guarantee that a true pattern has been discovered. Conversely, if a model class is restrictive enough that it could not possibly fit arbitrary labels, yet it successfully fits the actual training data, we can more confidently conclude it has learned a generalizable pattern.
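The random-label thought experiment above can be sketched numerically. The snippet below (a minimal illustration, not from the source) contrasts a memorizing "model" — which, like an unfalsifiable theory, fits any labeling perfectly — with a restrictive linear classifier that cannot; the model names and setup are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y_random = rng.integers(0, 2, size=200)  # arbitrary labels: no pattern exists

# High-capacity "model": memorize every training point (1-NN on the training
# set itself). It fits ANY labeling perfectly -- an unfalsifiable theory.
def memorizer_train_accuracy(X, y):
    preds = y  # the nearest training neighbor of a training point is itself
    return float(np.mean(preds == y))

# Restrictive model: linear least-squares classifier thresholded at 0.5.
# With only d+1 parameters, it cannot fit 200 arbitrary labels.
def linear_train_accuracy(X, y):
    A = np.column_stack([X, np.ones(len(X))])  # features plus a bias column
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    preds = (A @ w > 0.5).astype(int)
    return float(np.mean(preds == y))

mem_acc = memorizer_train_accuracy(X, y_random)   # perfect fit on noise
lin_acc = linear_train_accuracy(X, y_random)      # near-chance on noise
```

Because the memorizer fits random labels perfectly, fitting the real data tells us nothing; because the linear model stays near chance on random labels, a good fit on real data is evidence of genuine structure.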
Tags
D2L
Dive into Deep Learning @ D2L
Related
Overfitting a supervised statistical model
Training Error and Test Error
Generalizability of a supervised statistical model
Underfitting a supervised statistical model
Measuring Model Complexity: Rademacher complexity
Bias of Supervised Models in Statistical Learning
Variance of Supervised Models in Statistical Learning
Notions of Model Complexity
Relationship Between Dataset Size and Model Complexity