
ANN – Self Organizing Neural Network (SONN) Learning Algorithm

Prerequisite: ANN | Self Organizing Neural Network (SONN)

In a Self Organizing Neural Network (SONN), learning is performed by shifting weights from inactive connections to active ones. The winning neurons are selected to learn, along with their neighborhood neurons. If a neuron does not respond to a specific input pattern, no learning takes place in that neuron.

Self-Organizing Neural Network Learning Algorithm:

Step 0:
  • Initialize synaptic weights $W_{ij}$ to random values in a small interval, e.g., $[-1, 1]$ or $[0, 1]$.
  • Assign topological neighborhood parameters.
  • Define the learning rate $\alpha$ (say, 0.1).
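
As a minimal sketch of Step 0 in Python/NumPy (the layer sizes `n` and `m`, the initial radius, and the seed are illustrative assumptions, not prescribed by the algorithm):

```python
import numpy as np

rng = np.random.default_rng(42)             # seed fixed only for reproducibility

n = 2                                       # neurons in the input layer
m = 3                                       # neurons in the Kohonen layer

W = rng.uniform(-1.0, 1.0, size=(m, n))     # synaptic weights W_ij in [-1, 1]
alpha = 0.1                                 # learning rate
radius = 1                                  # initial topological neighborhood radius
```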
Step 1: Repeat Steps 2-8 until the termination condition is reached (a minimal sketch of the full loop appears after the step list):
  • Step 2: For a randomly chosen input vector $X$ from the set of training samples, perform Steps 3-5.
  • Step 3: Compute the distance between the input vector $X$ and the synaptic weight vector $W_{j}$ of each neuron $j$. For each $j$, the Euclidean distance between the pair of $(n \times 1)$ vectors $X$ and $W_{j}$ is given by

        \[$D(j) = \left\|\mathbf{X}-\mathbf{W}_{j}\right\|=\left[\sum_{i=1}^{n}\left(x_{i}-w_{i j}\right)^{2}\right]^{1 / 2}$\]

    This is a criterion for measuring the similarity between two vectors. Every node (neuron) in the network is evaluated to determine whose weight vector is closest to the input vector.
  • Step 4: Select the winning neuron $j_{\mathbf{X}}$ that best matches the input vector $X$, i.e., the neuron for which $D(j)$ is minimum:

        \[$j_{\mathbf{X}}(p)=\min _{j}\left\|\mathbf{X}-\mathbf{W}_{j}(p)\right\|=\left\{\sum_{i=1}^{n}\left[x_{i}-w_{i j}(p)\right]^{2}\right\}^{1 / 2}$\\ $j = 1, 2, \ldots, m$\]

    where $n$ is the number of neurons in the input layer and $m$ is the number of neurons in the Kohonen layer. The winning node is generally termed the Best Matching Unit (BMU).
  • Step 5: Learning phase: update the synaptic weights. For all nodes $j$ within the neighborhood of the winning neuron, and for every $i$:

        \[$w_{i j}(p+1)=w_{i j}(p)+\Delta w_{i j}(p)$\]

    where $\Delta w_{i j}(p)$ is the weight correction at iteration $p$. The weight update is based on the competitive learning rule:

        \[$\Delta w_{i j}(p) = \left\{\begin{array}{cl}\alpha\left(x_{i}-w_{i j}(p)\right), & \text { if neuron } j \text { wins the competition } \\ 0, & \text { if neuron } j \text { loses the competition }\end{array}\right.$\]

    where $\alpha$ is the learning rate and the neighborhood function is centered on the winner-takes-all neuron $j_{\mathbf{X}}$ at iteration $p$. Any neuron within the radius of the BMU has its weights modified to make it more similar to the input vector.
  • Step 6: Update the learning rate $\alpha$ according to the equation:

        \[$\alpha(t + 1) = 0.5\,\alpha(t)$\]

  • Step 7: At specified intervals, reduce the radius of the topological neighborhood around the BMU. As the clustering process progresses, the radius of the neighborhood around a cluster unit decreases accordingly.
  • Step 8: Check for the termination condition.
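
Putting Steps 1-8 together, the sketch below is a minimal NumPy implementation under some illustrative assumptions: the neurons are taken to lie on a 1-D grid indexed $0, \ldots, m-1$, and the function name `train_sonn`, the `epochs` count, and the decay schedules are chosen for illustration rather than taken from the algorithm above.

```python
import numpy as np

def train_sonn(X_train, m, epochs=5, alpha=0.1, radius=1, seed=0):
    """Minimal SONN (Kohonen) competitive-learning sketch, Steps 0-8."""
    rng = np.random.default_rng(seed)
    n = X_train.shape[1]                        # neurons in the input layer
    W = rng.uniform(-1.0, 1.0, size=(m, n))     # Step 0: random weights in [-1, 1]

    for epoch in range(epochs):                 # Step 1: outer loop
        for X in rng.permutation(X_train):      # Step 2: inputs in random order
            d = np.linalg.norm(X - W, axis=1)   # Step 3: Euclidean distances D(j)
            bmu = int(np.argmin(d))             # Step 4: winning neuron (BMU)
            for j in range(m):                  # Step 5: update BMU + neighbors
                if abs(j - bmu) <= radius:      # assumed 1-D grid neighborhood
                    W[j] += alpha * (X - W[j])
        alpha *= 0.5                            # Step 6: decay the learning rate
        if radius > 0:                          # Step 7: shrink the neighborhood
            radius -= 1
    return W                                    # Step 8: fixed epoch count terminates
```

With `radius = 0` the update reduces to pure winner-takes-all learning: only the BMU's weight vector moves toward the input.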
Example with iterations: Take a 2-dimensional input vector $\mathbf{X}=\left[\begin{array}{l}0.52 \\ 0.12\end{array}\right]$.
    1. The initial weight vectors, $W_{\mathbf{j}}$, are given by

        \[$\mathbf{W}_{1}=\left[\begin{array}{l}0.27 \\ 0.81\end{array}\right] \quad \mathbf{W}_{2}=\left[\begin{array}{l}0.42 \\ 0.70\end{array}\right] \quad \mathbf{W}_{3}=\left[\begin{array}{l}0.43 \\ 0.21\end{array}\right]$\]

    2. We find the winning (best-matching) neuron $j_{\mathbf{X}}$ satisfying the minimum-distance Euclidean criterion:

        \[$d_{1}=\sqrt{\left(x_{1}-w_{11}\right)^{2}+\left(x_{2}-w_{21}\right)^{2}}=\sqrt{(0.52-0.27)^{2}+(0.12-0.81)^{2}}=0.73$\\ $d_{2}=\sqrt{\left(x_{1}-w_{12}\right)^{2}+\left(x_{2}-w_{22}\right)^{2}}=\sqrt{(0.52-0.42)^{2}+(0.12-0.70)^{2}}=0.59$\\ $d_{3}=\sqrt{\left(x_{1}-w_{13}\right)^{2}+\left(x_{2}-w_{23}\right)^{2}}=\sqrt{(0.52-0.43)^{2}+(0.12-0.21)^{2}}=0.13$\]

    3. Neuron 3 is the winner, and its weight vector $W_{\mathbf{3}}$ is updated following the competitive learning rule:

        \[$\Delta w_{13}=\alpha\left(x_{1}-w_{13}\right)=0.1(0.52-0.43)=0.01$\\ $\Delta w_{23}=\alpha\left(x_{2}-w_{23}\right)=0.1(0.12-0.21)=-0.01$\]

    4. The updated weight vector $W_{\mathbf{3}}$ at iteration $(p + 1)$ is calculated as:

        \[$\mathbf{W}_{3}(p+1)=\mathbf{W}_{3}(p)+\Delta \mathbf{W}_{3}(p)=\left[\begin{array}{l}0.43 \\ 0.21\end{array}\right]+\left[\begin{array}{r}0.01 \\ -0.01\end{array}\right]=\left[\begin{array}{l}0.44 \\ 0.20\end{array}\right]$\]

    5. The weight vector $W_{\mathbf{3}}$ of the winning neuron 3 moves closer to the input vector $X$ with each iteration; a short numerical check follows.
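
The iteration above is easy to check numerically. The following sketch reproduces the distance computation, the winner selection, and the weight update, with all values copied from the example (`np.argmin` stands in for the minimum-distance criterion of Step 4):

```python
import numpy as np

X = np.array([0.52, 0.12])                  # input vector
W = np.array([[0.27, 0.81],                 # W1
              [0.42, 0.70],                 # W2
              [0.43, 0.21]])                # W3
alpha = 0.1

d = np.linalg.norm(X - W, axis=1)           # Step 3: Euclidean distances
print(np.round(d, 2))                       # -> [0.73 0.59 0.13]

winner = np.argmin(d)                       # Step 4: index 2, i.e. neuron 3
W[winner] += alpha * (X - W[winner])        # Step 5: competitive weight update
print(np.round(W[winner], 2))               # -> [0.44 0.2], matching W3(p+1)
```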

