Classification Using Sklearn Multi-layer Perceptron

Last Updated : 11 Oct, 2023

Classification using Multi-Layer Perceptrons (MLPs) is a key machine learning method that belongs to the class of artificial neural networks. It is a flexible and effective approach to a wide variety of classification problems, including text classification and image recognition. Where traditional linear classifiers fall short, MLPs are known for their capacity to model complicated, non-linear relationships in data. In this article, we'll look at how to implement classification using MLPs with scikit-learn, the popular Python machine learning framework.

Architecture and Working of Multi-Layer Perceptron

A Multi-Layer Perceptron (MLP) is a type of artificial neural network that consists of multiple layers of connected nodes (also known as neurons) and is frequently used for various machine learning tasks, including classification and regression. An overview of an MLP's architecture and working is provided below:

Architecture

  • Input Layer: The input layer is made up of neurons that directly take in the dataset's features. Each neuron in the input layer represents one feature, so the input layer's total number of neurons equals the dataset's total number of features.
  • Hidden Layer: One or more hidden layers sit between the input and output layers. The number of neurons in each hidden layer is a hyperparameter that you can choose, and it may differ from layer to layer. These hidden layers are essential for recognizing intricate patterns in the data (a sketch mapping this architecture onto scikit-learn follows this list).
  • Output Layer: The output layer generates the final predictions from the data processed in the hidden layers. The task's requirements determine how many neurons it contains:
    • For binary classification, there is often a single neuron that generates a probability score.
    • For multi-class classification, there are as many neurons as there are classes, and each neuron generates a probability score for its class.
    • For regression problems, a single neuron produces the continuous predicted value.
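
To make the layer sizes concrete, here is a minimal sketch (an illustration added here, using a synthetic dataset rather than the one in this article) of how this architecture maps onto scikit-learn: the input layer's size is inferred from the data, hidden_layer_sizes sets the hidden layers, and the output layer's size is inferred from the number of classes.

Python3

# Minimal sketch: 30 input features, two hidden layers (64 and 32
# neurons), and one logistic output unit for binary classification
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=30, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000,
                    random_state=0)
clf.fit(X, y)

# coefs_ holds one weight matrix per layer-to-layer connection:
# (30, 64), (64, 32), (32, 1)
print([w.shape for w in clf.coefs_])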

Working

  • Initialization: Initialize the weights (W) and biases (B) of all the network's neurons. These parameters are usually set to small random values.
  • Forward Propagation: During training, input data is passed through the network. Each neuron in a layer takes the weighted sum of the outputs from the previous layer, applies an activation function, and sends the result to the next layer. The activation functions introduce non-linearity into the model, which enables it to learn intricate correlations (see the NumPy sketch after this list).
  • Loss Calculation: A loss (error) is computed by comparing the network's output to the actual target values. Common loss functions include Mean Squared Error (MSE) for regression and Cross-Entropy for classification.
  • Backpropagation: To reduce the loss, the network adjusts its weights and biases. The backpropagation algorithm accomplishes this by calculating gradients of the loss with respect to each network parameter; optimization techniques such as Gradient Descent then use these gradients to update the weights and biases.
  • Training: The forward propagation, loss calculation, and backpropagation steps are repeated over a number of iterations (epochs) until the model converges to a solution. The learning rate and the number of iterations are hyperparameters that can be tuned.
  • Prediction: Once trained, the MLP makes predictions on new, unseen data by running forward propagation with the learned weights and biases.
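
To ground the forward-propagation step, here is a minimal NumPy sketch of a single forward pass through one hidden layer. The layer sizes, the ReLU/sigmoid pairing, and the random weights are illustrative assumptions, not scikit-learn internals.

Python3

import numpy as np

rng = np.random.default_rng(0)

# Illustrative shapes: 4 input features, 3 hidden neurons, 1 output
W1, b1 = rng.normal(size=(4, 3)) * 0.1, np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)) * 0.1, np.zeros(1)

def forward(x):
    # Hidden layer: weighted sum of inputs, then ReLU activation
    h = np.maximum(0, x @ W1 + b1)
    # Output layer: weighted sum, then sigmoid for a probability score
    return 1 / (1 + np.exp(-(h @ W2 + b2)))

x = rng.normal(size=(1, 4))  # one sample with 4 features
print(forward(x))            # a value in (0, 1)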

Although MLPs are renowned for their capacity to represent complicated relationships in data, they can be sensitive to certain hyperparameters, including the number of hidden layers and neurons, the choice of activation functions, and the regularization strategy. Proper hyperparameter tuning is crucial for MLPs to perform well; a brief tuning sketch follows.
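
As a hedged illustration of such tuning, scikit-learn's GridSearchCV can search over MLPClassifier hyperparameters with cross-validation. The parameter grid below is a small assumption chosen for brevity, not a recommended search space.

Python3

# A minimal, illustrative grid search over MLP hyperparameters
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)

# Illustrative grid; real searches usually cover more values
param_grid = {
    "hidden_layer_sizes": [(32,), (64, 32)],
    "alpha": [1e-4, 1e-3],          # L2 regularization strength
    "activation": ["relu", "tanh"],
}
search = GridSearchCV(MLPClassifier(max_iter=1000, random_state=42),
                      param_grid, cv=3)
search.fit(X, y)
print(search.best_params_)

In practice, wrapping the scaler and classifier in a scikit-learn Pipeline avoids fitting the scaler on the validation folds.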

Implementation

To perform classification with scikit-learn's MLPClassifier, we need to follow specific steps, walked through below:

Importing Libraries

Python3

# Import necessary libraries
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix


To create an MLP (Multi-Layer Perceptron) classifier with scikit-learn, start by importing the necessary libraries, as in the snippet above. The imports cover loading the Breast Cancer dataset, partitioning the data, standardizing features, and the metrics for model evaluation: accuracy, the classification report, and the confusion matrix.

Loading Dataset

Python3

# Load the Breast Cancer dataset
cancer_data = load_breast_cancer()
X, y = cancer_data.data, cancer_data.target


This code uses the load_breast_cancer() function to load the Breast Cancer dataset from scikit-learn. The feature data is assigned to the variable X, and the corresponding target labels to the variable y. This dataset is frequently used for binary classification tasks: classifying breast cancer tumors as either malignant or benign. The feature data (X) represents the measured properties of the tumors, while the target labels (y) give each tumor's class.
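
As a quick sanity check (an optional step, not part of the original walkthrough), you can inspect the dataset's dimensions and class names:

Python3

# Confirm dataset dimensions and class names
print(X.shape)                   # (569, 30): 569 samples, 30 features
print(cancer_data.target_names)  # ['malignant' 'benign']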

Splitting dataset into train and test sets

Python3

# Split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)


Using the train_test_split function from scikit-learn, this snippet divides the Breast Cancer dataset into training and testing sets. The feature data and target labels in X and y are split into four subsets: X_train (training features), X_test (testing features), y_train (training labels), and y_test (testing labels). The test_size parameter is set to 0.2, meaning 20% of the data will be used for testing while the machine learning model is trained on the remaining 80%. The random_state parameter fixes the random seed for the split, ensuring reproducibility.
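
When class proportions matter, a stratified split keeps the malignant/benign ratio consistent across both subsets. This optional variant (a refinement we add here, not part of the original walkthrough) passes stratify=y:

Python3

# Optional variant: preserve the class ratio in both splits
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)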

Feature Scaling

Python3

# Standardize features by removing the mean and scaling to unit variance
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)


This code uses scikit-learn's StandardScaler to standardize the features. First, the scaler is fitted to the training data (X_train), removing the mean and scaling to unit variance; the same transformation is then applied to the testing data (X_test). Fitting only on the training data avoids leaking information from the test set. Standardization ensures that every feature is on a comparable scale, which improves the performance of machine learning models that are sensitive to feature magnitudes.
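
For intuition, the transformation StandardScaler applies is simply z = (x - mu) / sigma per feature, using statistics computed from the training data. This hand-rolled sketch on a made-up array (an illustration only; use StandardScaler in practice) reproduces it with NumPy:

Python3

import numpy as np

# Made-up raw data standing in for unscaled training features
X_raw = np.array([[1.0, 200.0],
                  [2.0, 300.0],
                  [3.0, 400.0]])

mu = X_raw.mean(axis=0)    # per-feature mean
sigma = X_raw.std(axis=0)  # per-feature standard deviation

# The same z = (x - mu) / sigma transform StandardScaler applies
print((X_raw - mu) / sigma)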

Model Development

Python3

# Create an MLPClassifier model
mlp = MLPClassifier(hidden_layer_sizes=(64, 32),
                    max_iter=1000, random_state=42)


This code creates the MLPClassifier (Multi-Layer Perceptron Classifier). The hidden_layer_sizes argument describes the neural network's architecture: two hidden layers with 64 and 32 neurons respectively. The max_iter parameter specifies the maximum number of iterations for the solver to converge during training, while the random_state option seeds the random number generation, ensuring reproducible results.
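
MLPClassifier exposes further hyperparameters beyond these. The values below are illustrative assumptions, not tuned settings:

Python3

# Illustrative (untuned) values for other common hyperparameters
mlp_custom = MLPClassifier(
    hidden_layer_sizes=(64, 32),
    activation="relu",         # hidden-layer activation function
    solver="adam",             # weight-optimization algorithm
    alpha=1e-4,                # L2 regularization strength
    learning_rate_init=0.001,  # initial step size for the optimizer
    max_iter=1000,
    random_state=42,
)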

Training and Prediction

Python3

# Train the model on the training data
mlp.fit(X_train, y_train)
 
# Make predictions on the test data
y_pred = mlp.predict(X_test)
 
# Calculate the accuracy of the model
accuracy = accuracy_score(y_test, y_pred)
print(f"Accuracy: {accuracy:.2f}")


Output:

Accuracy: 0.97

This code trains the MLPClassifier on the training data using the fit method, which adjusts the model's internal parameters to recognize patterns in the data. Next, the trained model makes predictions on the test data, and these are compared with the actual labels. The accuracy_score function determines the model's accuracy as the fraction of predictions that match the actual labels, and the result is printed to two decimal places.
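
If you need class probabilities rather than hard labels (an optional step not in the original walkthrough), MLPClassifier also provides predict_proba:

Python3

# Per-class probability estimates for the first three test samples;
# columns follow mlp.classes_, i.e. [P(class 0), P(class 1)]
print(mlp.predict_proba(X_test[:3]))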

Evaluation

Python3

# Generate a classification report
class_report = classification_report(y_test, y_pred)
print("Classification Report:\n", class_report)


Output:

Classification Report:
              precision    recall  f1-score   support

           0       0.98      0.95      0.96        43
           1       0.97      0.99      0.98        71

    accuracy                           0.97       114
   macro avg       0.97      0.97      0.97       114
weighted avg       0.97      0.97      0.97       114

The classification report provides detailed performance metrics (precision, recall, F1-score, and support) for each class.
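
The confusion_matrix function was imported earlier but not yet used; it complements the report with raw counts of correct and incorrect predictions per class (rows are actual classes, columns are predicted classes):

Python3

# Raw counts of correct/incorrect predictions per class
conf_mat = confusion_matrix(y_test, y_pred)
print("Confusion Matrix:\n", conf_mat)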

Advantages of Classification Using Multi-Layer Perceptron

  • Non-Linearity Handling: MLPs can model complicated, non-linear relationships between features and target classes, making them excellent for a wide variety of classification problems.
  • Scalability: Thanks to improvements in hardware and libraries (such as TensorFlow and PyTorch), it is now possible to train large MLPs on enormous datasets, making them useful in many fields.
  • Feature Learning: Since MLPs automatically learn important features from the data, extensive feature engineering is less necessary.
  • Parallel Processing: On modern hardware, training and inference in MLPs can be parallelized, which speeds up execution.

