Types of Recurrent Neural Networks (RNN) in Tensorflow
A recurrent neural network (RNN) is a class of artificial neural network (ANN) mostly employed in speech recognition and natural language processing (NLP). RNNs are used in deep learning to build models that mimic the activity of neurons in the human brain.
Recurrent networks are designed to identify patterns in sequential data such as text, genomes, handwriting, the spoken word, and numerical time series from sensors, stock markets, and government agencies. A recurrent neural network resembles a regular neural network with the addition of a memory state to the neurons: a simple memory of past inputs is carried along through the computation.
Recurrent neural networks are a form of deep learning that processes data sequentially. Unlike a standard feed-forward network, where inputs and outputs are assumed to be independent of one another, an RNN treats each element of a sequence as dependent on the elements that came before it. Recurrent neural networks are so named because they perform the same mathematical computation at every step of the sequence, in consecutive order, passing state from one step to the next.
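The memory state described above can be sketched with the classic RNN recurrence, h_t = tanh(W_x x_t + W_h h_{t-1} + b). This is a minimal hand-rolled illustration with made-up dimensions, not TensorFlow's actual implementation:

```python
import numpy as np

# Toy sketch of the RNN recurrence: h_t = tanh(W_x @ x_t + W_h @ h_{t-1} + b).
# All dimensions and weights here are arbitrary illustrative choices.
rng = np.random.default_rng(0)
input_dim, hidden_dim, seq_len = 3, 4, 5

W_x = rng.normal(scale=0.1, size=(hidden_dim, input_dim))
W_h = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
b = np.zeros(hidden_dim)

x_seq = rng.normal(size=(seq_len, input_dim))  # one sequence of 5 time steps
h = np.zeros(hidden_dim)                       # the "memory state"

for x_t in x_seq:
    # The same weights are reused at every step; only h carries history.
    h = np.tanh(W_x @ x_t + W_h @ h + b)

print(h.shape)  # final hidden state summarising the whole sequence
```

The key point is that the loop reuses one set of weights at every time step, and all history is funnelled through the hidden vector `h`.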
Types of RNN:
1. One-to-One RNN:
This is the structure of a vanilla (plain feed-forward) neural network: a single fixed-size input maps to a single output, with no recurrence. It is used to solve general machine learning problems that have only one input and one output.
Example: classification of images.
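Since one-to-one has no recurrence, a single dense layer is enough to sketch it. The shapes below (a flattened 28x28 image, 10 classes) are illustrative assumptions:

```python
import numpy as np

# One-to-one sketch: one fixed-size input -> one output, no recurrence.
# Dimensions are hypothetical (a flattened 28x28 image, 10 classes).
rng = np.random.default_rng(1)
x = rng.normal(size=(784,))                 # one input
W = rng.normal(scale=0.01, size=(10, 784))
b = np.zeros(10)

logits = W @ x + b
probs = np.exp(logits) / np.exp(logits).sum()  # softmax over 10 classes
print(probs.argmax())                          # one output: predicted class
```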
2. One-to-Many RNN:
A one-to-many recurrent neural network takes a single input and produces several outputs.
Example: The image is sent into Image Captioning, which generates a sentence of words.
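A one-to-many cell can be sketched by letting one input initialise the hidden state and then unrolling the cell to emit several outputs, roughly as in image captioning. All names, dimensions, and the greedy decoding below are illustrative assumptions:

```python
import numpy as np

# One-to-many sketch: a single input (e.g. an image feature vector)
# initialises the hidden state; the cell then unrolls for several steps,
# emitting one "word" per step. Everything here is a toy assumption.
rng = np.random.default_rng(2)
feat_dim, hidden_dim, vocab, steps = 6, 4, 8, 3

W_init = rng.normal(scale=0.1, size=(hidden_dim, feat_dim))
W_h = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
W_out = rng.normal(scale=0.1, size=(vocab, hidden_dim))

image_features = rng.normal(size=feat_dim)  # the single input
h = np.tanh(W_init @ image_features)

tokens = []
for _ in range(steps):                      # several outputs
    h = np.tanh(W_h @ h)
    tokens.append(int((W_out @ h).argmax()))  # greedy "word" choice

print(tokens)  # a 3-token "caption"
```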
3. Many-to-One RNN:
This RNN produces a single output from a given series of inputs.
Example: Sentiment analysis, in which a text is classified as expressing positive or negative feelings.
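The many-to-one pattern can be sketched by reading a whole sequence and using only the final hidden state for one prediction. The word vectors and sigmoid readout below are illustrative assumptions, not a trained sentiment model:

```python
import numpy as np

# Many-to-one sketch: a whole token sequence is read, and only the final
# hidden state feeds one prediction (e.g. P(positive sentiment)).
# Weights and "word vectors" are random toy values, not a trained model.
rng = np.random.default_rng(3)
emb_dim, hidden_dim, seq_len = 5, 4, 6

W_x = rng.normal(scale=0.1, size=(hidden_dim, emb_dim))
W_h = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
w_out = rng.normal(scale=0.1, size=hidden_dim)

sentence = rng.normal(size=(seq_len, emb_dim))  # many inputs (word vectors)
h = np.zeros(hidden_dim)
for x_t in sentence:
    h = np.tanh(W_x @ x_t + W_h @ h)

score = 1.0 / (1.0 + np.exp(-(w_out @ h)))      # one output in (0, 1)
print(score)
```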
4. Many-to-Many RNN:
This RNN receives a set of inputs and produces a set of outputs.
Example: Machine translation, in which the RNN reads an English sentence and then outputs its French translation.
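The simplest many-to-many variant emits one output per input step (translation in practice typically uses an encoder-decoder pair, which this toy sketch does not model). Dimensions below are illustrative assumptions:

```python
import numpy as np

# Many-to-many sketch (synced variant): one output vector per input step.
# Real translation usually uses an encoder-decoder; this is a toy sketch.
rng = np.random.default_rng(4)
in_dim, hidden_dim, out_dim, seq_len = 5, 4, 7, 6

W_x = rng.normal(scale=0.1, size=(hidden_dim, in_dim))
W_h = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
W_o = rng.normal(scale=0.1, size=(out_dim, hidden_dim))

xs = rng.normal(size=(seq_len, in_dim))   # a set of inputs
h = np.zeros(hidden_dim)
outputs = []
for x_t in xs:
    h = np.tanh(W_x @ x_t + W_h @ h)
    outputs.append(W_o @ h)               # an output for every input

outputs = np.stack(outputs)
print(outputs.shape)  # (6, 7): one 7-dim output vector per time step
```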
Advantages of RNN:
- An RNN can model sequential data so that each sample is treated as dependent on the samples before it.
- An RNN can be combined with convolutional layers to extend the effective pixel neighbourhood.
Disadvantages of RNN:
- RNN training is a difficult process.
- With activation functions such as tanh or ReLU, an RNN cannot handle very long sequences.
- RNNs suffer from the vanishing and exploding gradient problems.
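The vanishing-gradient problem in the list above can be illustrated numerically: backpropagating through T tanh steps multiplies T Jacobian factors, and when each factor has norm below 1 the gradient from early steps decays geometrically. The recurrent matrix below (0.5 times the identity) is a deliberately contrived assumption to make the effect obvious:

```python
import numpy as np

# Toy illustration of the vanishing-gradient problem. With a recurrent
# weight of spectral norm 0.5, each backprop step through tanh multiplies
# the gradient by a factor of norm <= 0.5, so it shrinks geometrically.
rng = np.random.default_rng(5)
hidden_dim, T = 4, 50

W_h = 0.5 * np.eye(hidden_dim)   # contrived recurrent weight, norm < 1
h = rng.normal(size=hidden_dim)

grad = np.eye(hidden_dim)        # Jacobian d h_T / d h_T = identity
for _ in range(T):
    h = np.tanh(W_h @ h)
    # Chain rule: d h_t / d h_{t-1} = diag(1 - h_t^2) @ W_h
    grad = np.diag(1.0 - h**2) @ W_h @ grad

print(np.linalg.norm(grad))  # vanishingly small after 50 steps
```

The exploding case is the mirror image: a recurrent weight with norm above 1 makes the same product grow without bound, which is why architectures like LSTM and GRU, or gradient clipping, are used in practice.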