Learn Before
Multi-task Learning in Deep Learning
In multi-task learning, the goal is to have one neural network classify/predict multiple outputs at the same time, with each task helping the learning of the others, i.e., the network learns from multiple tasks simultaneously. This contrasts with transfer learning, where the tasks are learned sequentially, with the previous task helping the execution of the next.
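A common way to realize this idea is a network with a shared trunk and one output head per task, trained on a combined loss. The sketch below is a minimal, hypothetical PyTorch example (the layer sizes, task choices, and equal loss weighting are illustrative assumptions, not a prescribed recipe); the key point is that gradients from every task flow into the shared layers.

```python
import torch
import torch.nn as nn

# Hypothetical multi-task network: one shared trunk, one head per task.
class MultiTaskNet(nn.Module):
    def __init__(self, in_dim=8, hidden=16, n_classes=3):
        super().__init__()
        # Shared layers: learn features useful for every task.
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        # Task-specific heads produce the separate outputs.
        self.class_head = nn.Linear(hidden, n_classes)  # e.g., a classification task
        self.reg_head = nn.Linear(hidden, 1)            # e.g., a regression task

    def forward(self, x):
        h = self.trunk(x)
        return self.class_head(h), self.reg_head(h)

net = MultiTaskNet()
x = torch.randn(4, 8)                    # a toy batch of 4 examples
labels = torch.tensor([0, 1, 2, 1])      # toy classification targets
targets = torch.randn(4, 1)              # toy regression targets

logits, preds = net(x)
# One combined loss: the shared trunk receives gradient signal
# from both tasks at the same time (equal weighting assumed here).
loss = nn.CrossEntropyLoss()(logits, labels) + nn.MSELoss()(preds, targets)
loss.backward()
```

In practice the per-task losses are often weighted, since tasks with larger loss scales would otherwise dominate the shared representation.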
Tags
Data Science
Related
Example of Weight Initialization
Vanishing/exploding gradient
Symmetry Breaking in Deep Learning
How to Initialize Weights to Prevent Vanishing/Exploding Gradients
Transfer Learning in Deep Learning
Multi-task Learning in Deep Learning
Variance of Layer Output in Forward Propagation
Default Random Initialization
Xavier Initialization
Learn After
When Multi-Task Learning Doesn't Make Sense
An Overview of Multi-Task Learning in Deep Neural Networks
MTL Methods for Deep Learning
Implementing Multi-task Learning in Deep Learning
When Multi-task Learning in Deep Learning Makes Sense
Example of a Self-Driving Car for Multi-task Learning in Deep Learning