
The Lottery Ticket Hypothesis

The Lottery Ticket Hypothesis was presented in a research paper at ICLR 2019 by Jonathan Frankle and Michael Carbin of MIT CSAIL, where the paper received a Best Paper Award.

Background: Network Pruning
Pruning means reducing the size of a neural network by removing superfluous and unwanted parts. Network pruning is a commonly used technique to reduce the size, storage footprint, and computational cost of a neural network, for example so that an entire network can fit on a phone. The idea of network pruning originated in the 1990s and was popularized again around 2015.
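As a toy illustration of what removing superfluous parts looks like at the weight level, the sketch below zeroes out the smallest-magnitude entries of a single weight matrix using NumPy. The matrix, the 50% pruning fraction, and the variable names are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Toy stand-in for one layer's trained weights (illustrative values only).
rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4))

# Treat the smallest-magnitude half of the weights as "superfluous".
prune_fraction = 0.5
threshold = np.quantile(np.abs(weights), prune_fraction)
mask = (np.abs(weights) >= threshold).astype(weights.dtype)

# Pruned entries are exactly zero and can be stored or skipped sparsely.
pruned_weights = weights * mask
print(f"kept {int(mask.sum())} of {mask.size} weights")
```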
 
How do you “prune” a neural network?
We can summarize the process of pruning into four major steps (a minimal code sketch follows the list):



  1. Train the Network
  2. Remove superfluous structures
  3. Fine-tune the network
  4. Optionally: repeat Steps 2 and 3 iteratively
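A minimal sketch of this four-step loop in PyTorch, using torch.nn.utils.prune for Step 2. The tiny model, the synthetic data, the epoch counts, and the 20% pruning rate per round are illustrative assumptions rather than settings from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

torch.manual_seed(0)
X = torch.randn(512, 20)          # synthetic inputs (placeholder data)
y = torch.randint(0, 2, (512,))   # synthetic labels

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

def train(epochs):
    for _ in range(epochs):
        optimizer.zero_grad()
        loss_fn(model(X), y).backward()
        optimizer.step()

# Step 1: train the full network.
train(epochs=50)

# Steps 2-4: repeatedly prune the smallest-magnitude 20% of the remaining
# weights, then fine-tune the surviving weights.
for _ in range(3):
    for module in model.modules():
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=0.2)  # Step 2
    train(epochs=20)                                                  # Step 3
```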

Before moving further, you should know the experiment at the heart of the Lottery Ticket Hypothesis:

  1. Randomly initialize the full network
  2. Train it and prune superfluous structure
  3. Reset each remaining weight to its value from Step 1 (i.e., its original random initialization).

This basically suggests that “There exists a subnetwork inside a randomly-initialized dense neural network which, when trained in isolation, can match or even outperform the accuracy of the original network.” The paper calls such a subnetwork a “winning ticket.”
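The sketch below walks through this experiment on a toy PyTorch model: save the random initialization, train, build a magnitude-based mask, reset the surviving weights to their initial values, and retrain the sparse subnetwork in isolation. The model size, the synthetic data, and the choice to keep the largest 20% of weights are illustrative assumptions, not the paper's settings.

```python
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 20)          # synthetic inputs (placeholder data)
y = torch.randint(0, 2, (512,))   # synthetic labels

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()

# Step 1: randomly initialize the full network and keep a copy of the initial weights.
init_state = copy.deepcopy(model.state_dict())

def train(epochs, masks=None):
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()
        if masks is not None:     # keep pruned weights pinned at exactly zero
            with torch.no_grad():
                for name, param in model.named_parameters():
                    if name in masks:
                        param.mul_(masks[name])

# Step 2: train the full network, then mark the smallest-magnitude weights as pruned.
train(epochs=50)
masks = {}
for name, param in model.named_parameters():
    if name.endswith("weight"):
        threshold = param.detach().abs().quantile(0.8)            # keep the largest 20%
        masks[name] = (param.detach().abs() >= threshold).float()

# Step 3: reset the surviving weights to their values from Step 1 (the "winning ticket")
# and train the sparse subnetwork in isolation.
model.load_state_dict(init_state)
with torch.no_grad():
    for name, param in model.named_parameters():
        if name in masks:
            param.mul_(masks[name])
train(epochs=50, masks=masks)
```

According to the hypothesis, this reset-to-initialization subnetwork should match or even outperform the dense network it came from; in the paper's experiments, resetting the same subnetwork to fresh random values instead performed noticeably worse.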
 
Advantages of Trained Pruned Networks

  1. Smaller model size and lower storage requirements.
  2. Less computation, which means faster and cheaper inference.
  3. Easier deployment on resource-constrained devices such as phones.
  4. A well-chosen pruned subnetwork can match or even outperform the accuracy of the original dense network.

Further Scope of Research

 
Link to the research paper: The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
