Learn Before
How does Unsupervised Pretraining act as a regularizer?
• Pretraining encourages the learning algorithm to discover features that relate to the underlying causes that generate the observed data.
• However, compared to other forms of unsupervised learning, unsupervised pretraining has the disadvantage of operating with two separate training phases.
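The two separate phases can be sketched as follows. This is a minimal toy illustration, not a reference implementation: a one-hidden-layer autoencoder is trained on unlabeled data (phase 1), and its learned encoder weights then initialize a supervised classifier (phase 2). All data, sizes, and hyperparameters here are made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy unlabeled data: 200 samples, 8 features (illustrative only).
X = rng.normal(size=(200, 8))
n_hidden = 4
W_enc = rng.normal(scale=0.1, size=(8, n_hidden))   # encoder weights
W_dec = rng.normal(scale=0.1, size=(n_hidden, 8))   # decoder weights
lr = 0.01

# --- Phase 1: unsupervised pretraining (minimize reconstruction error) ---
loss_before = np.mean((np.tanh(X @ W_enc) @ W_dec - X) ** 2)
for _ in range(200):
    H = np.tanh(X @ W_enc)        # hidden features
    X_hat = H @ W_dec             # reconstruction of the input
    err = X_hat - X
    W_dec -= lr * (H.T @ err) / len(X)
    W_enc -= lr * (X.T @ ((err @ W_dec.T) * (1 - H**2))) / len(X)
loss_after = np.mean((np.tanh(X @ W_enc) @ W_dec - X) ** 2)

# --- Phase 2: supervised fine-tuning, initialized from the pretrained encoder ---
y = (X[:, 0] > 0).astype(float)   # toy binary labels
w_out = rng.normal(scale=0.1, size=n_hidden)
for _ in range(200):
    H = np.tanh(X @ W_enc)        # starts from pretrained features
    p = sigmoid(H @ w_out)
    err = p - y                   # logistic-loss gradient signal
    w_out -= lr * (H.T @ err) / len(X)
    W_enc -= lr * (X.T @ (np.outer(err, w_out) * (1 - H**2))) / len(X)

acc = ((sigmoid(np.tanh(X @ W_enc) @ w_out) > 0.5) == y).mean()
```

The regularizing effect comes from phase 1: the encoder weights that start phase 2 are constrained to features useful for reconstructing the data, rather than arbitrary random values. The disadvantage is also visible: the two phases each have their own objective and hyperparameters, and phase 1 never sees the supervised loss.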
Tags
Data Science
Related
Rules-Based Systems vs. Classic Machine Learning vs. Representation Learning vs. Deep Learning
Methods of Feature Learning
Distributed Representations
Nondistributed Representations
How does Unsupervised Pretraining act as a regularizer?
Disadvantage of Pretraining
When to use greedy unsupervised pretraining
Greedy Layer-Wise Unsupervised Pretraining
Data Augmentation
Structured Prediction