Why is greedy layer-wise unsupervised pretraining called so?

  • It is called greedy because it optimizes each piece of the network independently, one piece at a time, instead of optimizing all components of the final network jointly.

  • It is called layer-wise because the independently optimized pieces are the individual layers: training proceeds one layer at a time, each new layer trained on top of the previously trained ones.

  • It is called unsupervised because each layer is trained with an unsupervised representation learning algorithm (e.g. an autoencoder or RBM), requiring no labels.

  • It is called pretraining because it only provides a starting point for the network's weights; the actual joint (typically supervised) training of the whole network follows afterwards.
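The procedure above can be sketched in a few lines of numpy. This is a minimal illustration, not a reference implementation: it assumes a tiny sigmoid autoencoder trained with plain gradient descent, and the architecture (`layer_sizes`) and hyperparameters are made up for the example. Each layer is trained greedily as an autoencoder on the output of the layers below it, and the collected encoder weights serve as the pretrained starting point.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(X, hidden, epochs=200, lr=0.1):
    """Train a one-hidden-layer autoencoder on X (unsupervised: the
    target is X itself) and return the learned encoder weights."""
    n, d = X.shape
    W_enc = rng.normal(0, 0.1, (d, hidden))
    W_dec = rng.normal(0, 0.1, (hidden, d))
    for _ in range(epochs):
        H = sigmoid(X @ W_enc)      # encode
        X_hat = H @ W_dec           # decode (linear output)
        err = X_hat - X             # reconstruction error
        grad_dec = H.T @ err / n
        dH = (err @ W_dec.T) * H * (1 - H)   # backprop through sigmoid
        grad_enc = X.T @ dH / n
        W_dec -= lr * grad_dec
        W_enc -= lr * grad_enc
    return W_enc

# Greedy layer-wise pretraining: each layer is optimized independently,
# as an autoencoder over the representation produced by the layers below.
X = rng.normal(size=(100, 8))       # toy unlabeled data
layer_sizes = [6, 4]                # hypothetical architecture
pretrained = []
inp = X
for h in layer_sizes:
    W = train_autoencoder(inp, h)
    pretrained.append(W)
    inp = sigmoid(inp @ W)          # output of this layer feeds the next

# `pretrained` now holds the starting weights for joint fine-tuning.
print([W.shape for W in pretrained])   # → [(8, 6), (6, 4)]
```

Note that no layer ever sees a gradient from the layers above it during this phase; joint fine-tuning of the full stack only begins after all layers have been pretrained.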

Updated 2021-08-05

Tags

Data Science