Learn Before
Concept
When Multi-task Learning in Deep Learning Makes Sense
- Training on a set of tasks that could benefit from shared low-level features: For example, recognizing traffic lights, cars, and pedestrians involves similar low-level features (edges, shapes, road context) that can also help recognize stop signs, since all of these objects appear in road scenes. Sharing those features saves training time and yields better performance.
- The amount of data you have for each task is similar: This rule does not always hold, however. Suppose the self-driving example has 4 tasks and each task has 100 examples. Because the tasks are learned simultaneously, each task does not depend solely on its own 100 examples; it can also draw on the 300 examples from the other three tasks. If the number of tasks grows to, say, 100, each task can potentially draw on 9,900 additional examples, 100 from each of the other 99 tasks.
- You can train a neural network big enough to do well on all the tasks: The alternative is to train a separate neural network for each task. Rather than training one neural network to detect all the pedestrians, cars, stop signs, and traffic lights, you could train one network per task. But if some of the earlier features in the neural network can be shared between these different types of objects, then training one network to do four things results in better performance than training four networks to do the four tasks separately (see the sketch after this list).
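The shared-trunk idea above can be sketched in a few lines of PyTorch. This is a minimal illustration, not a reference implementation from the source: the `MultiTaskRoadNet` name, the layer sizes, the 64x64 input resolution, and the convention of marking unlabeled entries with -1 are all assumptions made here. A single feature extractor feeds one detection head per task, and the loss is summed only over the entries that are actually labeled, which is what lets each task benefit from images labeled for the other tasks.

```python
import torch
import torch.nn as nn

class MultiTaskRoadNet(nn.Module):
    """Shared trunk with one binary-detection head per task.

    Hypothetical architecture for illustration; the layer sizes
    are arbitrary assumptions, not taken from the source.
    """
    def __init__(self, num_tasks: int = 4, feature_dim: int = 128):
        super().__init__()
        # Shared low-level feature extractor used by all tasks.
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feature_dim), nn.ReLU(),
        )
        # One logit per task: pedestrian, car, stop sign, traffic light.
        self.heads = nn.Linear(feature_dim, num_tasks)

    def forward(self, x):
        # Returns logits of shape (batch, num_tasks).
        return self.heads(self.trunk(x))

def masked_multitask_loss(logits, labels):
    """Binary cross-entropy summed only over labeled entries.

    `labels` uses -1 (an assumed convention) where a task is
    unlabeled for an image, so each task still learns from images
    that were labeled only for the other tasks.
    """
    mask = labels >= 0
    return nn.functional.binary_cross_entropy_with_logits(
        logits[mask], labels[mask].float()
    )

# Usage: a batch of 8 RGB images, with -1 marking unlabeled entries.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(-1, 2, (8, 4))
model = MultiTaskRoadNet()
loss = masked_multitask_loss(model(images), labels)
loss.backward()
```

Training four heads on one trunk this way is the alternative to training four separate networks: the trunk's early features are updated by every task's gradient, which is where the shared-feature benefit comes from.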
Updated 2021-04-07
Tags
Data Science
Related
When Multi-Task Learning Doesn't Make Sense
An Overview of Multi-Task Learning in Deep Neural Networks
MTL Methods for Deep Learning
Implementing Multi-task Learning in Deep Learning
Example of a Self-Driving Car for Multi-task Learning in Deep Learning