Bias and Variance in Deep Learning
One reason practitioners tune hyperparameters is to manage the bias and variance of a model.

Bias is the difference between the average prediction of a model and the correct value it is trying to predict. A model with high bias pays little attention to the training data and oversimplifies the underlying relationship, which leads to high error on both the training and test sets (under-fitting).

Variance is the variability of a model's predictions for a given data point; it measures how sensitive the model is to the particular training data it saw. A model with high variance fits the training data too closely, including its noise, so it may perform well on the training set but generalize poorly to the test set (over-fitting).

In short, the bias-variance trade-off is a scale between under- and over-fitting. By adjusting hyperparameters, you can move along this scale to find the model that makes the best predictions on unseen data.
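The trade-off can be seen in a minimal sketch (using NumPy polynomial fitting rather than a deep network, as an illustrative stand-in): a low-degree polynomial under-fits noisy data (high bias), while a very high-degree one over-fits it (high variance), scoring well on training points but worse on held-out ones. The data and degrees here are hypothetical choices for illustration.

```python
import numpy as np

# Illustrative example: fit polynomials of increasing degree to noisy
# samples of a sine curve and compare training vs. test error.
rng = np.random.default_rng(0)

# Ground truth is a sine curve; observations carry Gaussian noise.
f = lambda x: np.sin(2 * np.pi * x)
x_train = np.sort(rng.uniform(0, 1, 30))
x_test = np.sort(rng.uniform(0, 1, 30))
y_train = f(x_train) + rng.normal(0, 0.2, x_train.size)
y_test = f(x_test) + rng.normal(0, 0.2, x_test.size)

def fit_and_score(degree):
    # Fit a least-squares polynomial on the training set, then report
    # mean squared error on both the training and test sets.
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_mse, test_mse

for degree in (1, 3, 15):
    train_mse, test_mse = fit_and_score(degree)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```

Degree 1 shows high bias (high error everywhere), while degree 15 shows high variance (low training error, but a larger gap to the test error); a moderate degree sits near the sweet spot between the two.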