Regularization
Regularization is a modification made to a learning algorithm with the goal of reducing generalization error, but not training error.
Linear regression models outputs that depend linearly on their inputs, so it performs poorly on non-linear relationships. More expressive models can capture such relationships, but they also risk overfitting the training data; regularization encodes a preference for some models over others, steering learning toward solutions that generalize better.
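As a minimal sketch of the idea, the example below fits ridge regression (linear regression with an L2 penalty) using its closed-form solution. The data, the penalty strength `lam`, and the helper name `ridge_fit` are illustrative choices, not from the original note.

```python
import numpy as np

# Ridge (L2-regularized) linear regression via the closed-form solution
#   w = (X^T X + lam * I)^{-1} X^T y
# Setting lam = 0 recovers ordinary least squares.

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                 # 100 samples, 5 features
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ true_w + 0.1 * rng.normal(size=100)   # noisy linear targets

def ridge_fit(X, y, lam):
    """Solve the penalized least-squares problem for weights w."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w_unreg = ridge_fit(X, y, lam=0.0)   # ordinary least squares
w_reg = ridge_fit(X, y, lam=10.0)    # weights shrunk toward zero

# A larger penalty shrinks the weight vector: training error goes up
# slightly, in exchange for (hopefully) lower generalization error.
print(np.linalg.norm(w_unreg), np.linalg.norm(w_reg))
```

Note how the penalty changes which model the algorithm selects without changing the hypothesis class itself: both fits are linear, but the regularized one prefers smaller weights.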
Tags: Data Science, D2L (source: Dive into Deep Learning @ D2L)