When to use greedy unsupervised pretraining

Greedy unsupervised pretraining is beneficial in many settings, of little use in others, and in some cases even harmful. It is used to greatest advantage in natural language processing, where one can pretrain once on a huge unlabeled corpus, learn a good representation, and then reuse that representation for a supervised task. Greedy unsupervised pretraining is most successful when the number of labeled examples is very small or the number of unlabeled examples is very large.
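The workflow above, pretrain on plentiful unlabeled data and then fit a small supervised model on the learned representation, can be sketched as follows. This is a toy illustration with made-up data; PCA stands in for the unsupervised learner at each layer (in practice this slot is filled by an autoencoder or RBM), and the layer sizes, sample counts, and helper names are all assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_layer(X, n_hidden):
    """One unsupervised 'layer': keep the top principal directions of its
    input. PCA is a stand-in here; the greedy recipe is the same whichever
    unsupervised learner fills each layer."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:n_hidden].T  # encoder weights, shape (input_dim, n_hidden)

def logistic_regression(F, y, epochs=500, lr=0.5):
    """Tiny supervised learner trained on the pretrained features."""
    w, b = np.zeros(F.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
        w -= lr * F.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
    return w, b

# Large unlabeled set: 10-d observations with 3-d latent structure.
A = rng.normal(size=(3, 10))
X_unlabeled = rng.normal(size=(2000, 3)) @ A

# Greedy phase: train each layer on the output of the layer below it.
W1 = pca_layer(X_unlabeled, n_hidden=6)
W2 = pca_layer(X_unlabeled @ W1, n_hidden=3)

# Supervised phase: a small labeled set reuses the pretrained encoder.
Z_small = rng.normal(size=(50, 3))
X_small, y_small = Z_small @ A, (Z_small[:, 0] > 0).astype(float)
features = (X_small @ W1) @ W2
w, b = logistic_regression(features, y_small)
acc = (((features @ w + b) > 0) == (y_small == 1)).mean()
print(f"training accuracy on 50 labeled examples: {acc:.2f}")
```

Note that the unsupervised phase never sees a label: each layer is fit greedily on the representation produced by the layer below it, and only the final, cheap classifier touches the small labeled set.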

Updated 2021-07-15

Tags

Data Science