Concept

Greedy Layer-Wise Unsupervised Pretraining

  • Greedy layer-wise unsupervised pretraining allowed researchers to train supervised multi-layer deep neural networks without relying on specialized architectural building blocks such as convolution or recurrence.

  • Each layer is trained one at a time in a greedy manner: a new layer is fit to the output of the previously trained layers, without revisiting the earlier layers. The result is a fully pretrained multi-layer network that can then be fine-tuned jointly on the supervised task (see the sketch after this list).

  • This type of pretraining requires a single-layer representation learning algorithm, such as:
    - an RBM (Restricted Boltzmann Machine)
    - a single-layer autoencoder
    - a sparse coding model
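A minimal sketch of the procedure using single-layer autoencoders (one of the algorithms listed above), written in PyTorch. The data, layer sizes, training settings, and the 10-class head are illustrative assumptions, not from the original note.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 64)       # toy unlabeled data: 256 samples, 64 features (assumed)
layer_sizes = [64, 32, 16]     # input dim followed by two hidden layers (assumed)

encoders = []
h = X
for d_in, d_out in zip(layer_sizes[:-1], layer_sizes[1:]):
    # Train ONE layer at a time: an autoencoder that reconstructs the
    # current representation h from a d_out-dimensional code.
    enc = nn.Linear(d_in, d_out)
    dec = nn.Linear(d_out, d_in)
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
    for _ in range(200):
        opt.zero_grad()
        recon = dec(torch.relu(enc(h)))
        loss = nn.functional.mse_loss(recon, h)
        loss.backward()
        opt.step()
    encoders.append(enc)
    # Greedy step: fix this layer's codes and use them as input to the next layer.
    with torch.no_grad():
        h = torch.relu(enc(h))

# Stack the pretrained encoders and add a supervised head; the whole
# network can now be fine-tuned jointly on labeled data.
model = nn.Sequential(
    encoders[0], nn.ReLU(),
    encoders[1], nn.ReLU(),
    nn.Linear(layer_sizes[-1], 10),  # hypothetical 10-class classifier head
)
```

The "greedy" choice is visible in the loop: each autoencoder optimizes only its own reconstruction loss, ignoring how its codes will affect later layers or the final supervised objective.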


Updated 2025-08-28

Tags

Data Science