Dropout in Neural Networks

  • Difficulty Level : Hard
  • Last Updated : 16 Jul, 2020

The concept of Neural Networks is inspired by the neurons in the human brain, and scientists wanted a machine to replicate the same process. This carved a path to one of the most important topics in Artificial Intelligence. A Neural Network (NN) is based on a collection of connected units or nodes called artificial neurons, which loosely model the neurons in a biological brain. Since such a network is created artificially in machines, we refer to it as an Artificial Neural Network (ANN). This article assumes that you have a decent working knowledge of ANNs.
Now, let us look more closely at dropout in ANNs.


When a fully-connected layer has a large number of neurons, co-adaptation is more likely to happen. Co-adaptation refers to multiple neurons in a layer extracting the same, or very similar, hidden features from the input data. This can happen when the connection weights for two different neurons are nearly identical.

An example of co-adaptation between neurons A and B. Due to identical weights, A and B will pass the same value into C.

This poses two different problems to our model:

  • Wasted computational resources, since multiple neurons compute the same output.
  • If many neurons extract the same features, those features carry extra significance in the model. This leads to overfitting when the duplicated features are specific to only the training set.
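For intuition, here is a small NumPy sketch of the co-adaptation described above (the input and weight values are made up for illustration): neurons A and B share identical weights, so they always emit the same activation, and a downstream neuron receives no new information from B.

```python
import numpy as np

x = np.array([0.5, -1.2, 3.0])      # input vector (hypothetical values)
w_a = np.array([0.4, -0.1, 0.3])    # weights of neuron A (hypothetical)
w_b = w_a.copy()                    # neuron B: identical weights

a_out = np.maximum(0.0, w_a @ x)    # ReLU activation of A
b_out = np.maximum(0.0, w_b @ x)    # ReLU activation of B

print(a_out == b_out)  # True: B merely duplicates A's feature
```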

Solution to the problem:

As the title suggests, we use dropout while training the NN to minimize co-adaptation.
In dropout, we randomly shut down some fraction of a layer’s neurons at each training step by zeroing out the neuron values. The fraction of neurons to be zeroed out is known as the dropout rate, r_d. The remaining neurons have their values multiplied by 1/(1 - r_d) so that the expected sum of the neuron values remains the same.
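The zero-and-rescale step described above can be sketched in NumPy as follows (a minimal illustration of so-called "inverted" dropout; the random seed and layer size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, rate, training=True):
    # Zero out a fraction `rate` of units, then rescale the
    # survivors by 1 / (1 - rate) to preserve the expected sum.
    if not training or rate == 0.0:
        return x
    mask = rng.random(x.shape) >= rate  # keep each unit with probability 1 - rate
    return x * mask / (1.0 - rate)

x = np.ones(6)
y = dropout(x, rate=1/3)
# Each entry of y is either 0 (dropped) or ~1.5 (kept and rescaled),
# so the expected sum of the layer's output stays at 6.
```

At inference time (`training=False`), the layer is simply an identity pass-through.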

The two images show dropout applied to a layer of 6 units at successive training steps. The dropout rate is 1/3, so the remaining 4 neurons at each training step have their values scaled by 1.5. In effect, we train a random sub-network at each step rather than the whole network at once. This reduces co-adaptation and pushes each neuron to learn hidden features that are useful on their own.

Dropout can be applied to a network using the TensorFlow APIs as,

# rate: Float between 0 and 1.
# The fraction of the input units to drop.
tf.keras.layers.Dropout(rate)
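As a fuller sketch (the layer sizes and the 0.3 rate here are arbitrary choices, not from the original), a Dropout layer is typically placed between the layers of a Keras model; it is active when called with `training=True` and is an identity pass-through at inference:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.3),   # drop 30% of the Dense outputs during training
    tf.keras.layers.Dense(10, activation="softmax"),
])

x = np.random.rand(4, 20).astype("float32")
train_out = model(x, training=True)    # dropout mask applied
infer_out = model(x, training=False)   # dropout disabled
```

Note that `model.fit` sets `training=True` automatically, so no extra handling is needed in a normal training loop.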

