Prerequisite: ANN | Self Organizing Neural Network (SONN)
In a Self Organizing Neural Network (SONN), learning is performed by shifting weights from inactive connections to active ones. The winning neurons are selected to learn, along with the neurons in their neighborhood. If a neuron does not respond to a specific input pattern, no learning takes place in that neuron.
Self-Organizing Neural Network Learning Algorithm:
- Initialize synaptic weights to random values in a specific interval, such as [-1, 1] or [0, 1].
- Assign topological neighborhood parameters.
- Define learning rate (say, 0.1).
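The initialization steps above can be sketched as follows (a minimal NumPy sketch; the layer sizes and the neighborhood radius of 1.0 are illustrative assumptions, while the [-1, 1] interval and the 0.1 learning rate come from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs = 2    # neurons in the input layer (illustrative)
n_outputs = 3   # neurons in the Kohonen layer (illustrative)

# Initialize synaptic weights to random values in [-1, 1]
weights = rng.uniform(-1.0, 1.0, size=(n_outputs, n_inputs))

# Topological neighborhood radius (illustrative) and learning rate from the text
radius = 1.0
learning_rate = 0.1
```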
Step 1: While the termination condition is not reached, repeat Steps 2-8.
Step 2: For each input vector x, do Steps 3-5.
Step 3: For each neuron j in the Kohonen layer, compute the squared Euclidean distance:
D(j) = Σ_{i=1}^{n} (x_i − w_ij)^2, for j = 1, 2, ..., m
where n is the number of neurons in the input layer and m is the number of neurons in the Kohonen layer. This is the criterion for measuring the similarity between the input vector and each neuron's weight vector.
Step 4: Find the index J for which D(J) is a minimum. The winning node J is generally termed the Best Matching Unit (BMU).
Step 5: Update the weights of the winning neuron and of all neurons within its topological neighborhood:
w_ij(p + 1) = w_ij(p) + Δw_ij(p)
where Δw_ij(p) is the weight correction at iteration p. This weight-update process is based on the competitive learning rule:
Δw_ij(p) = α [x_i − w_ij(p)]
where α is the learning rate. The neighborhood function is centered around the winner-takes-all neuron at iteration p; any neurons within the radius of the BMU are modified to make them more similar to the input vector.
Step 6: Update the learning rate α.
Step 7: Reduce the radius of the topological neighborhood at specified times.
Step 8: Test the stopping condition.
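Steps 1-8 above can be sketched as a single training loop over a 1-D Kohonen layer (a minimal NumPy sketch; the decay schedules for the learning rate and the neighborhood radius in Steps 6-7 are illustrative assumptions, since the text does not fix them):

```python
import numpy as np

def train_som(data, n_outputs, alpha=0.1, radius=1.0, n_epochs=100, seed=0):
    """Train a 1-D Kohonen layer on `data` (shape: samples x features)."""
    rng = np.random.default_rng(seed)
    n_inputs = data.shape[1]
    # Step 0: random weights in [-1, 1]
    w = rng.uniform(-1.0, 1.0, size=(n_outputs, n_inputs))
    positions = np.arange(n_outputs)  # neuron positions on the 1-D map

    for epoch in range(n_epochs):                    # Step 1
        for x in data:                               # Step 2
            # Step 3: squared Euclidean distance D(j) for every neuron j
            d = np.sum((x - w) ** 2, axis=1)
            # Step 4: the winning neuron J (the BMU) minimizes D(j)
            J = np.argmin(d)
            # Step 5: update the BMU and all neurons within its radius
            in_hood = np.abs(positions - J) <= radius
            w[in_hood] += alpha * (x - w[in_hood])
        # Steps 6-7: decay the learning rate and shrink the radius
        # (illustrative schedules -- not specified in the text)
        alpha *= 0.99
        radius *= 0.99
    return w                                         # Step 8: epochs exhausted
```

After training, each neuron's weight vector approximates the center of a cluster of the input samples it won.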
Example with iterations:
Take a 2-dimensional input vector x:
- The initial weight vectors w_j are given for each of the three neurons.
- We find the winning (best-matching) neuron j_X satisfying the minimum-distance Euclidean criterion: j_X = min_j ||x − w_j||, j = 1, 2, 3.
- Neuron 3 is the winner, and its weight vector w_3 is updated following the competitive learning rule: Δw_3 = α(x − w_3).
- The updated weight vector w_3 at iteration (p + 1) is calculated as: w_3(p + 1) = w_3(p) + Δw_3(p).
- With each iteration, the weight vector of the winning neuron 3 moves closer to the input vector x.
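The single iteration above can be reproduced numerically. The input vector and initial weights below are illustrative values chosen so that neuron 3 wins; the article's original numbers did not survive extraction, so these should not be read as the author's figures:

```python
import numpy as np

alpha = 0.1
x = np.array([0.52, 0.12])             # illustrative 2-D input vector

# Illustrative initial weight vectors for neurons 1, 2, 3
w = np.array([[0.27, 0.81],
              [0.42, 0.70],
              [0.43, 0.21]])

# Euclidean distance from x to each neuron's weight vector
d = np.linalg.norm(x - w, axis=1)
winner = np.argmin(d)                  # index 2 -> neuron 3 wins

# Competitive learning rule: move the winner's weights toward x
w[winner] += alpha * (x - w[winner])
```

With these values, neuron 3's weight vector moves from [0.43, 0.21] to [0.439, 0.201], i.e. closer to x, as described above.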