Introduction to Multi-Task Learning (MTL) for Deep Learning

Multi-Task Learning is a sub-field of deep learning. It is recommended that you familiarize yourself with the basics of neural networks before reading further, as they underpin everything discussed here.

What is Multi-Task Learning?

Multi-Task Learning is a sub-field of machine learning that aims to solve multiple tasks at the same time by taking advantage of the similarities between them. This can improve learning efficiency and also act as a regularizer, as we will discuss shortly.
Formally, if there are n tasks (where a conventional deep learning approach would train one model per task), and these n tasks, or a subset of them, are related to each other but not identical, Multi-Task Learning (MTL) can improve the learning of any one task by using the knowledge contained in all n tasks.
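To make this concrete, here is a minimal sketch in PyTorch (all layer sizes, names, and the task weights are illustrative assumptions, not a fixed recipe): a single shared encoder feeds two task-specific heads, and the joint objective is a weighted sum of the per-task losses, so one backward pass sends gradients from both tasks into the shared parameters.

import torch
import torch.nn as nn

shared = nn.Linear(16, 8)   # parameters updated by both tasks
head_a = nn.Linear(8, 1)    # task A: regression head
head_b = nn.Linear(8, 1)    # task B: binary-classification head

x = torch.randn(32, 16)     # toy input batch
features = shared(x)        # one representation, reused by both tasks

loss_a = nn.MSELoss()(head_a(features), torch.randn(32, 1))
loss_b = nn.BCEWithLogitsLoss()(head_b(features),
                                torch.randint(0, 2, (32, 1)).float())

# The task weights (1.0 and 0.5) are assumed here; in practice they
# are hyperparameters that balance the tasks against each other.
joint_loss = 1.0 * loss_a + 0.5 * loss_b
joint_loss.backward()       # gradients from both tasks reach `shared`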

Intuition behind Multi-Task Learning (MTL):
With deep learning models, we usually aim to learn a good representation of the features or attributes of the input data in order to predict a specific target. Formally, we optimize for one particular objective by training a model and fine-tuning its hyperparameters until performance can no longer be improved.
With MTL, it may be possible to push performance even further: because the model learns (updates its weights) not just for one specific task but for a set of related tasks, it is forced to learn a more generalized representation.
Biologically, humans appear to learn in a similar way: we often learn better when we practise multiple related tasks than when we focus on one specific task for a long time.

MTL as a regularizer:

In machine learning terminology, MTL can also be viewed as a way of inducing bias. It is a form of inductive transfer: training on multiple tasks induces a bias that prefers hypotheses able to explain all n tasks.
By introducing this inductive bias, MTL acts as a regularizer: it reduces the risk of overfitting and limits the model's ability to fit random noise during training.

Now, let's discuss the two major and most prevalent techniques for applying MTL.

Hard Parameter Sharing –
A common set of hidden layers is shared across all tasks, while several task-specific layers are kept separate towards the end of the model. This technique is very useful: because a single shared hidden representation has to serve several tasks at once, the risk of overfitting to any one task is reduced.

[Figure: Hard Parameter Sharing]
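As a rough illustration (layer sizes and task count are arbitrary assumptions), a hard-sharing model in PyTorch might look like the sketch below: one shared trunk of hidden layers, with one small output head per task.

import torch
import torch.nn as nn

class HardSharingModel(nn.Module):
    # Hard parameter sharing: the trunk is common to all tasks,
    # while each task keeps its own output layer at the end.
    def __init__(self, in_dim=32, hidden=64, n_tasks=3):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.heads = nn.ModuleList(
            nn.Linear(hidden, 1) for _ in range(n_tasks)
        )

    def forward(self, x):
        shared = self.trunk(x)       # one representation for all tasks
        return [head(shared) for head in self.heads]

model = HardSharingModel()
outputs = model(torch.randn(8, 32))  # one prediction tensor per task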

Soft Parameter Sharing –
Each task has its own model with its own set of weights and biases, and the distance between the parameters of the different models is regularized so that the parameters are encouraged to become similar and to represent all the tasks.

[Figure: Soft Parameter Sharing]
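A minimal sketch of this idea in PyTorch (the squared L2 distance and the penalty weight 0.01 are assumptions; other distance measures are also used in practice): each task trains its own network, and a penalty on the distance between corresponding parameters pulls the two models towards each other.

import torch
import torch.nn as nn

model_a = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))
model_b = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))

def sharing_penalty(m1, m2):
    # Sum of squared L2 distances between corresponding parameters.
    return sum((p1 - p2).pow(2).sum()
               for p1, p2 in zip(m1.parameters(), m2.parameters()))

x = torch.randn(8, 32)
loss_a = nn.MSELoss()(model_a(x), torch.randn(8, 1))
loss_b = nn.MSELoss()(model_b(x), torch.randn(8, 1))

# Each model fits its own task, while the penalty keeps their
# parameters similar enough to share knowledge across tasks.
total = loss_a + loss_b + 0.01 * sharing_penalty(model_a, model_b)
total.backward()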

Assumptions and Considerations – Using MTL to share knowledge among tasks is useful only when the tasks are closely related; when this assumption is violated, performance can decline significantly.

Applications:
MTL techniques have found a variety of uses; some of the major applications are:

  • Object detection and facial recognition
  • Self-driving cars: pedestrians, stop signs, and other obstacles can be detected together
  • Multi-domain collaborative filtering for web applications
  • Stock prediction
  • Language modelling and other NLP applications



