Learn Before
Concept

Manual Implementation Learning Rate Decay

When training a single model over a long period, we can manually decrease α whenever learning slows down. This requires periodically checking training progress over the entire run and hand-tuning the value of α, which can be strenuous. So, this method works only when training a small number of models.
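The manual decision "has learning slowed down?" can be approximated with a simple plateau check on recent losses. The sketch below is illustrative only, not from the source: the function name, the halving factor, and the patience window are all assumptions.

```python
# Sketch of manual learning-rate decay: shrink alpha when the loss has
# stopped improving. All names and defaults here are illustrative.

def decay_if_plateaued(alpha, loss_history, patience=3, factor=0.5, tol=1e-4):
    """Return a smaller alpha if the last `patience` losses failed to beat
    the loss recorded just before them by more than `tol`; otherwise
    return alpha unchanged."""
    if len(loss_history) <= patience:
        return alpha  # not enough history to judge a plateau yet
    baseline = loss_history[-(patience + 1)]
    if min(loss_history[-patience:]) > baseline - tol:
        return alpha * factor  # plateaued: decay the learning rate
    return alpha  # still improving: keep alpha as-is
```

In practice the person training the model would call such a check at the end of each epoch (or inspect the loss curve by eye, as the text describes) and carry the possibly-reduced α into the next round of updates.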


Updated 2020-11-16

Tags

Data Science