Learn Before
Greedy Layer-Wise Unsupervised Pretraining
-
Greedy layer-wise unsupervised pretraining enabled researchers to train supervised multi-layer deep neural networks that did not incorporate specialized building blocks such as convolution or recurrence.
-
Each layer of the deep network is trained greedily, one at a time, as an unsupervised representation learner on the outputs of the layers trained before it; the pretrained stack then initializes a multi-layer network that can be fine-tuned jointly (illustrated in the sketch below).
-
This type of pretraining requires a single-layer representation learning algorithm, such as:
- a Restricted Boltzmann Machine (RBM)
- a single-layer autoencoder
- a sparse coding model
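A minimal sketch of the procedure, assuming PyTorch and using single-layer autoencoders as the representation learner; the layer sizes, learning rate, epoch count, and random placeholder data are illustrative assumptions, not a definitive implementation:

```python
import torch
import torch.nn as nn

def pretrain_layer(inputs, in_dim, hidden_dim, epochs=10, lr=1e-3):
    """Train one autoencoder layer to reconstruct its own inputs."""
    encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.Sigmoid())
    decoder = nn.Linear(hidden_dim, in_dim)
    opt = torch.optim.Adam(
        list(encoder.parameters()) + list(decoder.parameters()), lr=lr
    )
    for _ in range(epochs):
        opt.zero_grad()
        codes = encoder(inputs)
        loss = nn.functional.mse_loss(decoder(codes), inputs)
        loss.backward()
        opt.step()
    return encoder

# Greedy stage: each layer is trained on the (detached) representations
# produced by the already-trained layers below it.
layer_dims = [784, 256, 64]          # hypothetical layer sizes
x = torch.rand(512, layer_dims[0])   # placeholder unlabeled data
encoders = []
for in_dim, hidden_dim in zip(layer_dims[:-1], layer_dims[1:]):
    enc = pretrain_layer(x, in_dim, hidden_dim)
    encoders.append(enc)
    x = enc(x).detach()              # freeze this layer's codes for the next

# The pretrained stack initializes a supervised network, e.g. by appending
# a classifier head and fine-tuning all layers jointly on labeled data.
model = nn.Sequential(*encoders, nn.Linear(layer_dims[-1], 10))
```

Training each layer in isolation keeps every optimization problem shallow, which is what made deep stacks trainable before modern initialization and normalization techniques.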
Tags
Data Science
Related
Rules-Based Systems vs. Classic Machine Learning vs. Representation Learning vs. Deep Learning
Methods of Feature Learning
Distributed Representations
Nondistributed Representations
How does Unsupervised Pretraining act as a regularizer?
Disadvantage of Pretraining
When to use greedy unsupervised pretraining
Greedy Layer-Wise Unsupervised Pretraining
Data Augmentation
Structured Prediction