Learn Before
When to use greedy unsupervised pretraining
Greedy unsupervised pretraining can be beneficial in many settings, but in others it offers little benefit and can even be harmful. It is most advantageous in natural language processing, where one can pretrain once on a huge unlabeled corpus, learn a good representation, and then reuse that representation for a supervised task. Greedy unsupervised pretraining is most successful when the number of unlabeled examples is very large and the number of labeled examples is very small.
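The recipe above can be sketched in code: pretrain an encoder layer by layer on abundant unlabeled data, then reuse the frozen encoder as a feature extractor for a small labeled task. This is a minimal numpy sketch, not taken from the source; the autoencoder setup, hyperparameters, and all names (`train_autoencoder`, `X_unlab`, `X_lab`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_autoencoder(X, hidden, epochs=200, lr=0.1):
    """Train a one-hidden-layer autoencoder by gradient descent
    on mean squared reconstruction error; return the encoder weights."""
    n, d = X.shape
    W1 = rng.normal(0.0, 0.1, (d, hidden))   # encoder weights
    W2 = rng.normal(0.0, 0.1, (hidden, d))   # decoder weights
    for _ in range(epochs):
        H = np.tanh(X @ W1)                  # encode
        R = H @ W2                           # reconstruct
        err = R - X
        gW2 = H.T @ err / n                  # backprop through decoder
        gH = (err @ W2.T) * (1.0 - H ** 2)   # backprop through tanh
        gW1 = X.T @ gH / n
        W1 -= lr * gW1
        W2 -= lr * gW2
    return W1

# Abundant "unlabeled" data (synthetic stand-in for a large corpus).
X_unlab = rng.normal(size=(500, 20))

# Greedy layer-wise pretraining: train layer 1, freeze it,
# then train layer 2 on the codes produced by layer 1.
W_a = train_autoencoder(X_unlab, hidden=10)
H1 = np.tanh(X_unlab @ W_a)
W_b = train_autoencoder(H1, hidden=5)

# A small labeled set reuses the pretrained stack as fixed features
# for a downstream supervised learner.
X_lab = rng.normal(size=(30, 20))
features = np.tanh(np.tanh(X_lab @ W_a) @ W_b)
```

The point of the greedy scheme is that each layer solves a small, tractable reconstruction problem on its own, so the expensive pretraining happens once on the unlabeled data and only the cheap supervised step needs labels.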
Tags
Data Science
Related
Rules-Based Systems vs. Classic Machine Learning vs. Representation Learning vs. Deep Learning
Methods of Feature Learning
Distributed Representations
Nondistributed Representations
How does Unsupervised Pretraining act as a regularizer?
Disadvantage of Pretraining
When to use greedy unsupervised pretraining
Greedy Layer-Wise Unsupervised Pretraining
Data Augmentation
Structured Prediction