
Backpropagation in Data Mining

Last Updated : 05 Jan, 2023

Backpropagation is an algorithm that propagates errors from the output nodes back to the input nodes; hence it is simply referred to as the backward propagation of errors. It is used in many neural-network applications in data mining, such as character recognition and signature verification.

Neural Network:

Neural networks are an information-processing paradigm inspired by the human nervous system. Just as the human nervous system has biological neurons, neural networks have artificial neurons: mathematical functions derived from biological neurons. The human brain is estimated to have about 10 billion neurons, each connected to an average of 10,000 other neurons. Each neuron receives a signal through a synapse, which controls the effect of the signal on the neuron.

[Figure: Artificial Neural Network Structure]

Backpropagation:

Backpropagation is a widely used algorithm for training feedforward neural networks. It computes the gradient of the loss function with respect to the network weights, and it does so far more efficiently than naively computing the gradient for each weight separately. This efficiency makes it feasible to use gradient methods to train multi-layer networks and update weights to minimize loss; variants such as gradient descent and stochastic gradient descent are commonly used.

The backpropagation algorithm works by computing the gradient of the loss function with respect to each weight via the chain rule, computing the gradient layer by layer, and iterating backward from the last layer to avoid redundant computation of intermediate terms in the chain rule.
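
To make the chain rule concrete, here is a minimal Python sketch (the numbers and variable names are illustrative, not from any library) that computes the gradient of a squared-error loss with respect to the single weight of a sigmoid neuron, one chain-rule factor at a time:

    import math

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    # One sigmoid neuron: y = f(w*x + b), squared-error loss L = 0.5*(y - t)^2
    x, t = 0.5, 1.0              # input and target (illustrative values)
    w, b = 0.8, 0.1              # current weight and bias

    z = w * x + b                # net input
    y = sigmoid(z)               # output activation

    # Chain rule: dL/dw = dL/dy * dy/dz * dz/dw
    dL_dy = y - t                # derivative of 0.5*(y - t)^2 w.r.t. y
    dy_dz = y * (1.0 - y)        # sigmoid derivative f'(z) = f(z)(1 - f(z))
    dz_dw = x                    # derivative of w*x + b w.r.t. w

    dL_dw = dL_dy * dy_dz * dz_dw    # the gradient backpropagation computes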

Features of Backpropagation:

  1. It is a gradient descent method, as used in the case of a simple perceptron network with a differentiable unit.
  2. It differs from other networks in the way the weights are calculated during the network's learning period.
  3. Training is done in three stages:
    • feed-forward of the input training pattern
    • calculation and backpropagation of the error
    • updating of the weights

Working of Backpropagation:

Neural networks use supervised learning to generate output vectors from the input vectors on which the network operates. The network compares the generated output with the desired output and computes an error if the two do not match. The weights are then adjusted according to this error to bring the output closer to the desired one.

Backpropagation Algorithm:

Step 1: The inputs X arrive through the preconnected path.

Step 2: The input is modeled using actual weights W, which are usually chosen randomly.

Step 3: Calculate the output of each neuron, from the input layer through the hidden layer to the output layer.

Step 4: Calculate the error in the outputs:

Backpropagation Error = Actual Output − Desired Output

Step 5: From the output layer, go back to the hidden layer to adjust the weights to reduce the error.

Step 6: Repeat the process until the desired output is achieved.
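
These six steps can be seen in miniature with a single sigmoid neuron. The sketch below uses made-up inputs, target, and learning rate, and computes the error as desired minus actual (the sign convention of the detailed training algorithm given later):

    import math

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    x = [1.0, 0.5]               # Step 1: inputs X arrive
    w = [0.3, -0.2]              # Step 2: weights, normally chosen randomly
    t, alpha = 1.0, 0.5          # desired output and learning rate

    for epoch in range(1000):    # Step 6: repeat the process
        y = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))   # Step 3: forward pass
        error = t - y                                       # Step 4: output error
        if abs(error) < 0.05:    # close enough to the desired output
            break
        delta = error * y * (1.0 - y)                       # error term at the output
        w = [wi + alpha * delta * xi for wi, xi in zip(w, x)]   # Step 5: adjust weights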


Parameters:

  • x = input training vector, x = (x_1, x_2, …, x_n)
  • t = target output vector, t = (t_1, t_2, …, t_m)
  • δ_k = error correction term at output unit y_k
  • δ_j = error correction term at hidden unit z_j
  • α = learning rate
  • v_0j = bias of hidden unit j
  • w_0k = bias of output unit k
  • n, a, m = numbers of input, hidden, and output units

Training Algorithm:

Step 1: Initialize the weights to small random values.

Step 2: While the stopping condition is false, do Steps 3 to 9.

Step 3: For each training pair, do Steps 4 to 8.

Feed-Forward:

Step 4: Each input unit x_i (i = 1 to n) receives the input signal and transmits it to all units in the hidden layer.

Step 5: Each hidden unit z_j (j = 1 to a) sums its weighted input signals to calculate its net input:

                     z_inj = v_0j + Σ x_i v_ij     (i = 1 to n)

           It then applies the activation function, z_j = f(z_inj), and sends this signal to all units in the layer above, i.e., the output units.

           Each output unit y_k (k = 1 to m) sums its weighted input signals:

                     y_ink = w_0k + Σ z_j w_jk    (j = 1 to a)

           and applies its activation function to calculate the output signal:

                     y_k = f(y_ink)
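
The feed-forward formulas translate directly into NumPy. The following is only a sketch: the layer sizes n, a, m and the random initialization are illustrative; v and w are the input-to-hidden and hidden-to-output weight matrices, and v0 and w0 hold the corresponding biases:

    import numpy as np

    def f(z):                        # activation function (sigmoid, as an example)
        return 1.0 / (1.0 + np.exp(-z))

    n, a, m = 3, 4, 2                # sizes of input, hidden, and output layers
    rng = np.random.default_rng(0)
    v, v0 = rng.normal(size=(n, a)), np.zeros(a)   # hidden-layer weights and biases
    w, w0 = rng.normal(size=(a, m)), np.zeros(m)   # output-layer weights and biases

    x = rng.normal(size=n)           # one input training vector

    z_in = v0 + x @ v                # z_inj = v_0j + sum_i x_i v_ij
    z = f(z_in)                      # z_j   = f(z_inj)
    y_in = w0 + z @ w                # y_ink = w_0k + sum_j z_j w_jk
    y = f(y_in)                      # y_k   = f(y_ink)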

Backpropagation of Error:

Step 6: Each output unit y_k (k = 1 to m) receives a target pattern corresponding to the input training pattern, and its error term is calculated as:

                   δ_k = (t_k − y_k) f′(y_ink)

Step 7: Each hidden unit z_j (j = 1 to a) sums its delta inputs from the units in the layer above:

                  δ_inj = Σ δ_k w_jk     (k = 1 to m)

              Its error term is then calculated as:

                  δ_j = δ_inj f′(z_inj)
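
Continuing the NumPy sketch from the feed-forward step, Steps 6 and 7 each become a couple of lines; f_prime is the derivative of the sigmoid used above, and the target vector t is illustrative:

    def f_prime(z):                      # sigmoid derivative f'(z) = f(z)(1 - f(z))
        s = f(z)
        return s * (1.0 - s)

    t = np.array([1.0, 0.0])             # target pattern for this input

    delta_k = (t - y) * f_prime(y_in)    # Step 6: δ_k = (t_k - y_k) f'(y_ink)
    delta_in = w @ delta_k               # δ_inj = sum_k δ_k w_jk
    delta_j = delta_in * f_prime(z_in)   # Step 7: δ_j = δ_inj f'(z_inj)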

Updating of Weights and Biases:

Step 8: Each output unit y_k (k = 1 to m) updates its bias and weights (j = 0 to a). The weight correction term is given by:

                                        Δw_jk = α δ_k z_j

                   and the bias correction term is given by Δw_0k = α δ_k.

                   Therefore:   w_jk(new) = w_jk(old) + Δw_jk

                                w_0k(new) = w_0k(old) + Δw_0k

                   Each hidden unit z_j (j = 1 to a) updates its bias and weights (i = 0 to n). The weight correction term is:

                                        Δv_ij = α δ_j x_i

                   and the bias correction term is:

                                        Δv_0j = α δ_j

                   Therefore:   v_ij(new) = v_ij(old) + Δv_ij

                                v_0j(new) = v_0j(old) + Δv_0j

Step 9: Test the stopping condition. The stopping condition can be the minimization of the total error or reaching a preset number of epochs.
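
Step 8 reduces to an outer product between each layer's input signals and its error terms, and Step 9 closes the loop. Continuing the same sketch, one complete training iteration for the pattern x ends with these updates (α is again illustrative):

    alpha = 0.5                          # learning rate α

    # Step 8: weight and bias corrections for both layers
    w  += alpha * np.outer(z, delta_k)   # Δw_jk = α δ_k z_j
    w0 += alpha * delta_k                # Δw_0k = α δ_k
    v  += alpha * np.outer(x, delta_j)   # Δv_ij = α δ_j x_i
    v0 += alpha * delta_j                # Δv_0j = α δ_j

    # Step 9: in a full training loop, stop once the total error
    # 0.5 * np.sum((t - y) ** 2) is small enough or after a set number of epochs.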

Need for Backpropagation:

Backpropagation, the “backward propagation of errors,” is very useful for training neural networks. It is fast, simple, and easy to implement. Beyond the number of inputs, it requires no special parameters to be set, and since no prior knowledge of the network is required, it is a flexible method.

Types of Backpropagation:

There are two types of backpropagation networks.

  • Static backpropagation: A static backpropagation network maps static inputs to static outputs. Such networks can solve static classification problems such as optical character recognition (OCR).
  • Recurrent backpropagation: Recurrent backpropagation is used for fixed-point learning: activations are fed forward until they settle at a fixed value. Unlike static backpropagation, it does not provide an instant mapping.

Advantages:

  • It is simple, fast, and easy to program.
  • Apart from the number of inputs, no other parameters need to be tuned.
  • It is flexible and efficient.
  • Users do not need to learn any special functions.

Disadvantages:

  • It is sensitive to noisy data and irregularities; noisy data can lead to inaccurate results.
  • Performance is highly dependent on the input data.
  • Training can consume a lot of time.
  • A matrix-based approach is preferred over a mini-batch approach.

