Learn Before
Disadvantage of Pretraining
• In unsupervised pretraining, there is no way to flexibly adapt the strength of the regularization. The pretraining is either performed or it is not; unlike a penalty such as weight decay, there is no single hyperparameter that can be smoothly adjusted to control how strongly the pretraining regularizes the supervised model.
• Another disadvantage of pretraining is that the procedure runs in two separate phases, each with its own hyperparameters. The performance of the second (supervised) phase cannot be predicted from the output of the first (unsupervised) phase alone, so there is a long delay between choosing the first phase's hyperparameters and getting feedback on how well they worked.
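The two-phase structure can be sketched in a few lines. This is a minimal illustration, not the book's procedure: PCA (via SVD) stands in for an unsupervised pretraining step, logistic regression for the supervised fine-tuning phase, and all hyperparameter names and values (`n_components`, `lr`, `epochs`) are made up for the example. The point is that each phase carries its own hyperparameter set, and the first phase can only be evaluated after the second has finished.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                 # toy unlabeled-style inputs
y = (X[:, 0] + X[:, 1] > 0).astype(float)      # toy binary labels

# Phase 1 hyperparameters (unsupervised) -- chosen before any label is used.
phase1 = {"n_components": 4}

def pretrain(X, n_components):
    # Unsupervised feature learner: PCA via SVD as a stand-in
    # for, e.g., a greedily pretrained autoencoder layer.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:n_components].T                 # projection, shape (10, k)

# Phase 2 hyperparameters (supervised) -- an entirely separate set.
phase2 = {"lr": 0.5, "epochs": 200}

def finetune(H, y, lr, epochs):
    # Logistic regression by gradient descent on the pretrained features.
    w, b = np.zeros(H.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(H @ w + b)))
        w -= lr * H.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
    return w, b

W = pretrain(X, **phase1)          # phase 1 yields no supervised signal at all
w, b = finetune(X @ W, y, **phase2)
# Only now -- after both phases -- can phase1's hyperparameters be judged:
acc = ((X @ W @ w + b > 0) == (y > 0.5)).mean()
print(f"accuracy = {acc:.2f}")
```

Note also that the only pretraining "knob" here is whether `pretrain` is called at all; there is no continuous coefficient that scales its regularizing effect the way a weight-decay λ would.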
Tags
Data Science
Related
Rules-Based Systems vs. Classic Machine Learning vs. Representation Learning vs. Deep Learning
Methods of Feature Learning
Distributed Representations
Nondistributed Representations
How does Unsupervised Pretraining act as a regularizer?
Disadvantage of Pretraining
When to use greedy unsupervised pretraining
Greedy Layer-Wise Unsupervised Pretraining
Data Augmentation
Structured Prediction