Recurrent Neural Networks (RNNs) are a type of neural network in which the output from the previous step is fed as input to the current step. In traditional neural networks, all inputs and outputs are independent of each other, but in tasks such as predicting the next word of a sentence, the previous words are required, so the network needs a way to remember them. RNNs solve this with the help of a hidden layer. The key feature of an RNN is its hidden state, which retains information about the sequence seen so far.
An RNN has a "memory" that carries information about what has been computed so far. It uses the same parameters at every step, since it performs the same task on each input and hidden state to produce the output. This keeps the number of parameters fixed regardless of sequence length, unlike other neural networks.
How an RNN works
The working of an RNN can be understood with the help of the example below:
Suppose there is a deep network with one input layer, three hidden layers, and one output layer. Like any other neural network, each hidden layer has its own set of weights and biases: say (w1, b1) for hidden layer 1, (w2, b2) for the second hidden layer, and (w3, b3) for the third. This means each of these layers is independent of the others, i.e. they do not memorize the previous outputs.
Now the RNN will do the following:
- The RNN converts the independent activations into dependent ones by providing the same weights and biases to all the layers, thus reducing the number of parameters and memorizing each previous output by feeding it as input to the next hidden layer.
- Hence these three layers can be joined together into a single recurrent layer, such that the weights and biases of all the hidden layers are the same.
- Formula for calculating the current state:

  ht = f(ht-1, xt)

  where ht → current state, ht-1 → previous state, xt → input state

- Formula for applying the activation function (tanh):

  ht = tanh(whh · ht-1 + wxh · xt)

  where whh → weight at recurrent neuron, wxh → weight at input neuron

- Formula for calculating the output:

  yt = Why · ht

  where yt → output, Why → weight at output layer
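The three formulas above can be sketched in a few lines of NumPy. This is a minimal illustration of the recurrence, not a full implementation; the layer sizes and random initialization here are assumptions for the example:

```python
import numpy as np

def rnn_forward(xs, w_xh, w_hh, w_hy, h0):
    """Unroll a vanilla RNN over a sequence, reusing the same weights at every step."""
    h = h0
    ys = []
    for x in xs:
        # ht = tanh(whh . ht-1 + wxh . xt)  -- current state from previous state and input
        h = np.tanh(w_hh @ h + w_xh @ x)
        # yt = Why . ht  -- output from the current state
        ys.append(w_hy @ h)
    return h, ys

# Example: hidden size 4, input size 3, output size 2, sequence of 5 steps (assumed sizes)
rng = np.random.default_rng(0)
w_xh = rng.normal(scale=0.1, size=(4, 3))
w_hh = rng.normal(scale=0.1, size=(4, 4))
w_hy = rng.normal(scale=0.1, size=(2, 4))
xs = rng.normal(size=(5, 3))
h_final, ys = rnn_forward(xs, w_xh, w_hh, w_hy, np.zeros(4))
```

Note that the same three weight matrices are applied at every time step — this is the parameter sharing described above.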
Training through RNN
- A single time step of the input is provided to the network.
- Its current state is then calculated using the current input together with the previous state.
- The current ht becomes ht-1 for the next time step.
- One can go through as many time steps as the problem requires and combine the information from all the previous states.
- Once all the time steps are completed the final current state is used to calculate the output.
- The output is then compared to the actual output, i.e. the target output, and the error is computed.
- The error is then back-propagated to the network to update the weights and hence the network (RNN) is trained.
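The steps above can be sketched as a NumPy training step. For simplicity this assumes only the final state produces an output and uses a squared-error loss — a toy illustration of backpropagation through time, not a production trainer:

```python
import numpy as np

def train_step(xs, target, w_xh, w_hh, w_hy, lr=0.1):
    """One forward pass plus backpropagation through time (loss on the final output only)."""
    # Forward: unroll the RNN, storing each state for the backward pass
    hs = [np.zeros(w_hh.shape[0])]
    for x in xs:
        hs.append(np.tanh(w_hh @ hs[-1] + w_xh @ x))
    y = w_hy @ hs[-1]                      # output from the final state
    loss = 0.5 * np.sum((y - target) ** 2)

    # Backward: the error flows from the output back through every time step
    dy = y - target
    d_why = np.outer(dy, hs[-1])
    d_wxh = np.zeros_like(w_xh)
    d_whh = np.zeros_like(w_hh)
    dh = w_hy.T @ dy
    for t in range(len(xs) - 1, -1, -1):
        dz = dh * (1 - hs[t + 1] ** 2)     # back through tanh
        d_wxh += np.outer(dz, xs[t])
        d_whh += np.outer(dz, hs[t])
        dh = w_hh.T @ dz                   # propagate to the previous state

    # Gradient-descent update: the shared weights get one combined update
    w_xh -= lr * d_wxh
    w_hh -= lr * d_whh
    w_hy -= lr * d_why
    return loss

rng = np.random.default_rng(1)
w_xh = rng.normal(scale=0.1, size=(4, 3))
w_hh = rng.normal(scale=0.1, size=(4, 4))
w_hy = rng.normal(scale=0.1, size=(2, 4))
xs = rng.normal(size=(5, 3))
target = np.array([0.5, -0.5])
losses = [train_step(xs, target, w_xh, w_hh, w_hy) for _ in range(50)]
```

Calling `train_step` repeatedly on the same sequence should drive the loss down, which is exactly the "error is back-propagated to update the weights" step above.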
Advantages of Recurrent Neural Network
- An RNN retains information across time steps, which is what makes it useful for time-series prediction. Variants such as Long Short-Term Memory (LSTM) networks extend this ability to much longer sequences.
- Recurrent neural networks are even used together with convolutional layers to extend the effective pixel neighborhood.
Disadvantages of Recurrent Neural Network
- Gradient vanishing and exploding problems.
- Training an RNN is a very difficult task.
- It cannot process very long sequences when using tanh or ReLU as the activation function.
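The vanishing-gradient problem listed above can be seen directly: backpropagating through many tanh steps multiplies the gradient by a Jacobian at each step, and with modest recurrent weights its norm decays toward zero. A toy demonstration (the sizes and initialization scale are assumptions chosen to make the effect visible):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
w_hh = rng.normal(scale=0.1, size=(n, n))   # small recurrent weights

# Forward pass: 50 steps with no input, storing each state
h = rng.normal(size=n)
hs = [h]
for _ in range(50):
    h = np.tanh(w_hh @ h)
    hs.append(h)

# Backward pass: multiply by the Jacobian diag(1 - ht^2) . whh at each step
grad = np.ones(n)
norms = []
for h in reversed(hs[1:]):
    grad = w_hh.T @ (grad * (1 - h ** 2))
    norms.append(np.linalg.norm(grad))

print(norms[0], norms[-1])   # the gradient norm shrinks by many orders of magnitude
```

Architectures like LSTMs and GRUs were designed precisely to keep this gradient from collapsing over long sequences.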