
Training of ANN in Data Mining

Last Updated : 22 Nov, 2022

The term “artificial neural network” (ANN) refers to a hardware or software system in information technology (IT) that mimics the functioning of neurons in the human brain. ANNs (also known as neural networks) are a class of deep learning technology and a subset of AI (artificial intelligence). They were originally inspired by biological neurons, the basic processing units of the human brain.

ANN in Data Mining

Data mining is the process of extracting useful patterns and knowledge from a database. A data warehouse is the central repository where that information is stored.

Training of ANN:

We can train a neural network by feeding it training patterns and letting it adjust its weights according to some learning rule. The learning situations can be categorized as follows.

  1. Supervised Learning: The network is trained by providing it with input patterns and the matching output patterns. These input-output pairs are supplied by an external teacher or by the system that contains the network.
  2. Unsupervised Learning: The network is trained to respond to clusters of patterns within the input, without labeled outputs. Unsupervised learning algorithms analyze and group unlabeled datasets.
  3. Reinforcement Learning: This type of learning may be considered an intermediate form of the two types above: the model learns to return an optimal solution for a problem by taking a sequence of decisions itself and receiving feedback on them.
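As a minimal illustration of the supervised case, the sketch below trains a single perceptron on labeled input-output pairs (the logical AND function). The data, learning rate, and update rule are illustrative choices, not prescribed by the article:

```python
import numpy as np

# Hypothetical supervised-learning example: a single perceptron learns
# the logical AND function from labeled (input, output) pairs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # input patterns
T = np.array([0, 0, 0, 1])                      # matching desired outputs

rng = np.random.default_rng(0)
w = rng.uniform(-0.5, 0.5, size=2)  # randomly initialized weights
b = 0.0                             # bias
lr = 0.1                            # learning rate

for epoch in range(50):
    for x, t in zip(X, T):
        o = 1 if x @ w + b > 0 else 0   # step activation
        # Perceptron learning rule: adjust weights by the error (t - o)
        w = w + lr * (t - o) * x
        b = b + lr * (t - o)

predictions = [1 if x @ w + b > 0 else 0 for x in X]
print(predictions)  # learned AND function: [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron learning rule is guaranteed to converge on it; XOR, by contrast, would require a hidden layer.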

Another commonly used method of training artificial neural networks is the backpropagation algorithm, which is applied to feed-forward ANNs. Its goal is to reduce the error between the network's actual output and the desired output until the ANN learns the training data.

Steps of Backpropagation Algorithm:

  1. Present the training sample to the neural network.
  2. Compare the ANN’s output to the desired output from the data.
  3. Calculate the error in each output neuron.
  4. For each neuron, calculate what the output should have been and a scaling factor, i.e., how much lower or higher the output must be adjusted to match the desired output. This is the local error.

Algorithm:

1. Initialize the weights in the network.

2. Repeat for each training example e:

  • O = neural-net-output(network, e) ; forward pass
  • T = teacher output for e
  • Calculate the error (T – O) at the output units
  • Compute delta_wi for all weights from the hidden layer to the output layer ; backward pass
  • Compute delta_wi for all weights from the input layer to the hidden layer ; backward pass continued
  • Update the weights in the network

3. Until all examples are classified correctly or the stopping criterion is satisfied; return(network)
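The loop above can be sketched in code. The following is a minimal, illustrative implementation (one hidden layer, sigmoid units, full-batch updates on the XOR problem); names such as W1, W2, and lr, and the fixed epoch budget used as the stopping criterion, are assumptions for this example:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)  # teacher outputs

# 1. Initialize the weights in the network (small random values).
W1 = rng.normal(0.0, 0.5, (2, 4))   # input -> hidden
b1 = np.zeros(4)
W2 = rng.normal(0.0, 0.5, (4, 1))   # hidden -> output
b2 = np.zeros(1)
lr = 0.5                            # learning rate

O0 = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)  # output before training
loss0 = np.mean((T - O0) ** 2)

# 2. Repeat until the stopping criterion (a fixed epoch budget) is met.
for epoch in range(10000):
    H = sigmoid(X @ W1 + b1)         # forward pass: hidden activations
    O = sigmoid(H @ W2 + b2)         # forward pass: network output O
    error = T - O                    # error (T - O) at the output units
    # Backward pass: delta for hidden -> output weights.
    delta_out = error * O * (1 - O)
    # Backward pass continued: delta for input -> hidden weights.
    delta_hidden = (delta_out @ W2.T) * H * (1 - H)
    # Update the weights in the network.
    W2 += lr * H.T @ delta_out
    b2 += lr * delta_out.sum(axis=0)
    W1 += lr * X.T @ delta_hidden
    b1 += lr * delta_hidden.sum(axis=0)

loss = np.mean((T - O) ** 2)
print(f"MSE before training: {loss0:.3f}, after: {loss:.3f}")
```

The factors O * (1 - O) and H * (1 - H) are the derivative of the sigmoid, which is what makes the deltas proportional to the local error at each unit.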

Key Steps for Training a Neural Network:

Pick a neural network architecture. This means deciding on the connectivity pattern of the network, including the following aspects:

  • Number of input nodes: determined by the number of input features.
  • Number of hidden layers: the default is a single hidden layer, which is the most common practice.
  • Number of nodes in each hidden layer: when using multiple hidden layers, the best practice is to use the same number of nodes in each. The number of hidden units is generally comparable to the number of input nodes: the same number, or roughly two to three times as many.
  • Number of output nodes: determined by the number of output classes you want the neural network to distinguish.
  • Random initialization of weights: weights are initialized to small random values close to zero.
  • Implementation of the forward propagation algorithm to calculate the hypothesis function for a set of input vectors through the hidden layers.
  • Implementation of the cost function for optimizing parameter values. The cost function determines how well the neural network fits the training data.
  • Implementation of the backpropagation algorithm to compute the error vector for each node.
  • Use of gradient checking to compare the gradient computed via backpropagation against a numerical estimate of the cost-function gradient; this validates that the backpropagation implementation is correct.
  • Use of gradient descent or an advanced optimization technique, together with backpropagation, to minimize the cost function as a function of the weights.
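The gradient checking step mentioned above can be illustrated on a tiny model. The sketch below compares an analytic gradient (what backpropagation would compute for a single logistic unit with squared-error cost) against a central-difference numerical estimate; the sample values and function names are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost(w, x, t):
    o = sigmoid(x @ w)
    return 0.5 * (o - t) ** 2   # squared-error cost for one sample

def analytic_grad(w, x, t):
    # Chain rule, as backpropagation would compute it:
    # dC/dw = (o - t) * o * (1 - o) * x
    o = sigmoid(x @ w)
    return (o - t) * o * (1 - o) * x

x = np.array([0.5, -1.2, 0.3])   # illustrative input
t = 1.0                          # illustrative target
w = np.array([0.1, 0.4, -0.2])   # illustrative weights

# Numerical estimate: perturb each weight by +/- eps (central difference).
eps = 1e-5
numeric = np.array([
    (cost(w + eps * e, x, t) - cost(w - eps * e, x, t)) / (2 * eps)
    for e in np.eye(len(w))
])

diff = np.max(np.abs(numeric - analytic_grad(w, x, t)))
print(diff)  # difference should be tiny (central-difference error is O(eps^2))
```

If the maximum difference is not very small, the backpropagation implementation is likely buggy; gradient checking is typically disabled after debugging because it is slow.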

The Iterative Learning Process:

During this learning phase, the network learns by adjusting its weights so as to be able to predict the correct class label of input samples. Neural network learning is also referred to as “connectionist learning,” due to the connections between the units. Advantages of neural networks include their high tolerance to noisy data, as well as their ability to classify patterns on which they have not been trained.

