Artificial Neural Network (ANN):
An Artificial Neural Network (ANN) is a group of multiple perceptrons (neurons) arranged in layers. An ANN is also known as a Feed-Forward Neural Network because inputs are processed only in the forward direction.
This type of network is one of the simplest variants of neural networks. Information passes in one direction, from the input nodes through any hidden layers to the output nodes. The network may or may not have hidden layers; a network with fewer hidden layers is easier to interpret.
Advantages:
- Stores information across the entire network.
- Can work with incomplete knowledge.
- Fault tolerance.
- Distributed memory.

Disadvantages:
- Hardware dependence.
- Unexplained behavior of the network.
- Difficulty in determining the proper network structure.
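To make the feed-forward idea concrete, here is a minimal NumPy sketch of an ANN forward pass. The layer sizes, random weights, and the `feed_forward` helper are illustrative assumptions, not part of any particular library:

```python
import numpy as np

def relu(x):
    # Elementwise ReLU activation: max(0, x).
    return np.maximum(0.0, x)

def feed_forward(x, weights, biases):
    # Information flows strictly forward: input -> hidden layer -> output.
    a = x
    for W, b in zip(weights, biases):
        a = relu(a @ W + b)
    return a

rng = np.random.default_rng(0)
# A tiny network: 4 input features -> 8 hidden units -> 3 outputs.
weights = [rng.normal(size=(4, 8)), rng.normal(size=(8, 3))]
biases = [np.zeros(8), np.zeros(3)]
out = feed_forward(rng.normal(size=(2, 4)), weights, biases)
print(out.shape)  # (2, 3): one 3-dim output per input row
```

Note that nothing is ever fed back: each layer's output depends only on the layers before it, which is exactly what "feed-forward" means.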
Convolutional Neural Network (CNN):
Convolutional neural networks (CNN) are one of the most popular models used today. This computational model uses a variation of the multilayer perceptron and contains one or more convolutional layers, typically followed by pooling layers and fully connected layers. The convolutional layers create feature maps that record local regions of the image; these maps are then downsampled and passed on for nonlinear processing.
Advantages:
- Very high accuracy on image recognition problems.
- Automatically detects the important features without any human supervision.
- Weight sharing.

Disadvantages:
- Does not encode the position and orientation of objects.
- Lack of spatial invariance to the input data.
- Requires a lot of training data.
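The convolution-plus-pooling step described above can be sketched in plain NumPy. The `conv2d` and `max_pool` helpers and the toy edge-detecting kernel are illustrative assumptions, a simplified version of what CNN libraries implement:

```python
import numpy as np

def conv2d(image, kernel):
    # Slide the kernel over the image; each output value summarizes one
    # local region, forming a feature map.
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    # Downsample the feature map by taking the max in each size x size block.
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

image = np.arange(36, dtype=float).reshape(6, 6)          # toy 6x6 "image"
edge_kernel = np.array([[1.0, -1.0], [1.0, -1.0]])        # crude vertical-edge detector
fmap = conv2d(image, edge_kernel)                         # 5x5 feature map
pooled = max_pool(np.maximum(fmap, 0.0))                  # ReLU, then 2x2 max pooling
print(fmap.shape, pooled.shape)  # (5, 5) (2, 2)
```

Weight sharing is visible here: the same 2x2 kernel is reused at every position of the image, so the number of learned parameters does not grow with image size.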
Recurrent Neural Network (RNN):
Recurrent neural networks (RNN) are more complex. They save the output of processing nodes and feed the result back into the model, so information does not flow in one direction only. This is how the model learns to predict the outcome of a layer. Each node in the RNN acts as a memory cell, continuing the computation across time steps. If the network's prediction is incorrect, the system self-learns and continues working towards the correct prediction during backpropagation.
Advantages:
- An RNN remembers information through time, which makes it useful for time-series prediction. Variants such as Long Short-Term Memory (LSTM) strengthen this ability to remember previous inputs.
- Recurrent layers are even combined with convolutional layers to extend the effective pixel neighborhood.

Disadvantages:
- Vanishing and exploding gradient problems.
- Training an RNN is a very difficult task.
- It cannot process very long sequences when using tanh or ReLU as the activation function.
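The feedback loop can be sketched in a few lines of NumPy. The weight names (`W_xh`, `W_hh`, `W_hy`) and sizes are illustrative assumptions; the key point is that the hidden state `h` is fed back into the model at every step:

```python
import numpy as np

def rnn_forward(inputs, W_xh, W_hh, W_hy, h0):
    # The hidden state h acts as a memory cell: at each step the new state
    # mixes the current input (via W_xh) with the previous state (via W_hh).
    h = h0
    outputs = []
    for x in inputs:
        h = np.tanh(x @ W_xh + h @ W_hh)
        outputs.append(h @ W_hy)
    return np.array(outputs), h

rng = np.random.default_rng(1)
W_xh = rng.normal(scale=0.1, size=(3, 5))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(5, 5))   # hidden -> hidden (the recurrence)
W_hy = rng.normal(scale=0.1, size=(5, 2))   # hidden -> output
seq = [rng.normal(size=3) for _ in range(4)]  # a length-4 input sequence
outputs, h_final = rnn_forward(seq, W_xh, W_hh, W_hy, np.zeros(5))
print(outputs.shape)  # (4, 2): one output per time step
```

The repeated multiplication by `W_hh` is also why gradients vanish or explode over long sequences: backpropagating through many steps multiplies the same matrix (and tanh derivatives) over and over.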
Summary of all three networks in a single table:
| | ANN | CNN | RNN |
|---|---|---|---|
| Type of data | Tabular data, text data | Image data | Sequence data |
| Fixed-length input | Yes | Yes | No |
| Vanishing/exploding gradients | Yes | Yes | Yes |
| Performance | Considered less powerful than CNN and RNN. | Considered more powerful than ANN and RNN. | Less feature compatibility compared to CNN. |
| Applications | Facial recognition and computer vision. | Facial recognition, text digitization and natural language processing. | Text-to-speech conversion. |
| Main advantages | Fault tolerance; works with incomplete knowledge. | High accuracy on image recognition; weight sharing. | Remembers information through time; time-series prediction. |
| Disadvantages | Hardware dependence; unexplained network behavior. | Needs large training data; does not encode position and orientation of objects. | Vanishing and exploding gradients. |
This article is contributed by Abhishek Gupta.