Build a Neural Network Classifier in R

Last Updated : 12 Oct, 2023

Creating a neural network classifier in R can be done with Keras, a popular deep learning framework that provides a high-level interface for building and training neural networks. This article is a step-by-step guide to building a simple neural network classifier with Keras in the R programming language.

Before diving into building our own neural network classifier in R using Keras, it helps to review some fundamental concepts about neural networks and the tools you'll be using.

Neural Networks

  • Neural networks are a type of machine-learning model inspired by the structure of the human brain.
  • They consist of interconnected layers of artificial neurons (perceptrons) that process and transform input data to produce an output.
  • Neural networks are widely used for various tasks, including image classification, natural language processing, and regression.

Keras

  • Keras is an open-source deep learning framework that provides a high-level interface for building and training neural networks.
  • It runs on top of lower-level deep learning libraries; the R interface uses TensorFlow as its backend (historically, Keras also supported Theano and CNTK).
  • Keras simplifies the process of designing neural network architectures, making it accessible to both beginners and experts.

Data Preprocessing

  • Before feeding data into a neural network, it’s crucial to preprocess it. Common preprocessing steps include data splitting, normalization, and encoding of labels.
  • Data should typically be split into training and testing sets to evaluate model performance.
  • Feature scaling, such as normalization or standardization, helps improve model convergence.
  • For classification tasks, labels are often one-hot encoded to represent categorical classes; both steps are sketched below.
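
A minimal preprocessing sketch, for illustration only (the data frame df and its columns here are hypothetical, not part of this tutorial's dataset):

R

# Hypothetical data frame with two numeric features and an integer label
df <- data.frame(
  x1 = rnorm(10, mean = 5),
  x2 = rnorm(10, sd = 3),
  label = sample(0:2, 10, replace = TRUE)
)

# Standardize features: zero mean, unit variance per column
features <- scale(as.matrix(df[, c("x1", "x2")]))

# One-hot encode the integer labels (3 classes -> 3 columns);
# to_categorical() comes from the keras package
library(keras)
labels <- to_categorical(df$label, num_classes = 3)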

Neural Network Architecture

  • A neural network consists of layers, including input, hidden, and output layers.
  • The number of neurons in each layer and the activation functions used determine the model’s architecture.
  • Common activation functions include ReLU (Rectified Linear Unit) for hidden layers and softmax for multi-class classification in the output layer; see the small sketch after this list.
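
To make these functions concrete, here is a base-R sketch of ReLU and softmax (illustration only; Keras applies these internally, so you never need to define them yourself):

R

# ReLU: pass positive values through, clamp negatives to zero
relu <- function(x) pmax(0, x)
relu(c(-1, 0, 2))   # returns 0 0 2

# Softmax: turn a vector of scores into probabilities that sum to 1
# (subtracting max(x) first improves numerical stability)
softmax <- function(x) { e <- exp(x - max(x)); e / sum(e) }
softmax(c(1, 2, 3)) # returns ~0.09 0.24 0.67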

Model Compilation

  • To train a neural network, you need to compile it with specific configurations, including the choice of loss function, optimizer, and evaluation metrics.
  • The loss function quantifies the error between predicted and actual values, as the worked example after this list shows.
  • The optimizer updates the model’s weights to minimize the loss function.
  • Evaluation metrics, like accuracy, are used to monitor model performance during training.
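
As a worked example of what a loss function computes, here is categorical cross-entropy for a single sample in base R (illustration only; Keras computes this for you during training):

R

# Categorical cross-entropy for one sample: -sum(y_true * log(y_pred))
y_true <- c(0, 1)             # one-hot label: the sample belongs to class 1
y_pred <- c(0.3, 0.7)         # predicted class probabilities
-sum(y_true * log(y_pred))    # ~0.357; a perfect prediction would give 0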

Training and Evaluation

  • Training a neural network involves feeding it with labeled data, adjusting the weights through backpropagation, and minimizing the loss.
  • Training is performed for a fixed number of epochs (complete passes over the training data) with a specified batch size.
  • After training, the model is evaluated on a separate test dataset to assess its performance using metrics like accuracy.

With these concepts in mind, you can build your own neural network classifier in R using Keras by following the steps below. Feel free to adapt and modify the code to suit your specific dataset and classification task.

Neural Network with a Synthetic Dataset for Binary Classification

Install and load the required packages.

Make sure you have R (and optionally RStudio) installed. Install the keras package if you haven't already.

R

install.packages("keras")
# Load the necessary libraries
library(keras)
# On first use, also run install_keras() to set up the TensorFlow backend:
# install_keras()


Create our dataset

You'll need a dataset to train and test your neural network classifier. You can load a dataset of your choice; here we create a synthetic one for demonstration purposes.

R

# Create a synthetic dataset for binary classification
set.seed(123)
num_samples <- 1000
data <- data.frame(
  Feature1 = runif(num_samples),
  Feature2 = runif(num_samples),
  # Random labels: there is no real relationship to the features
  Label = sample(0:1, num_samples, replace = TRUE)
)


Preprocess the data

You should preprocess your data by splitting it into training and testing sets, normalizing the features, and converting the labels to one-hot encoded vectors if necessary.

R

# Split the dataset into training and testing sets
split_ratio <- 0.8
num_train_samples <- floor(num_samples * split_ratio)
train_data <- data[1:num_train_samples, ]
test_data <- data[(num_train_samples + 1):num_samples, ]
 
# Prepare the data for training; the runif() features already lie in [0, 1],
# so no further scaling is needed here
train_features <- as.matrix(train_data[, c("Feature1", "Feature2")])
train_labels <- to_categorical(train_data$Label, num_classes = 2)
test_features <- as.matrix(test_data[, c("Feature1", "Feature2")])
test_labels <- to_categorical(test_data$Label, num_classes = 2)
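
The rows of this synthetic dataset are already in random order, so the simple index-based split above is fine. For real data, you would typically shuffle before splitting; a minimal sketch, reusing data and num_train_samples from above:

R

# Shuffle row indices, then split on the shuffled order
shuffled_idx <- sample(nrow(data))
train_idx <- shuffled_idx[1:num_train_samples]
train_data <- data[train_idx, ]
test_data  <- data[-train_idx, ]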


Build the neural network model

Create a simple neural network model using the Keras Sequential API. Here’s an example with one hidden layer.

R

# Build the neural network model
model <- keras_model_sequential() %>%
  layer_dense(units = 16, activation = 'relu', input_shape = c(2)) %>%
  layer_dense(units = 2, activation = 'softmax')
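
Because this is a two-class problem, a common alternative (not used in the rest of this tutorial) is a single sigmoid output unit trained with binary cross-entropy directly on the 0/1 labels:

R

# Sketch of the equivalent binary formulation
model_bin <- keras_model_sequential() %>%
  layer_dense(units = 16, activation = 'relu', input_shape = c(2)) %>%
  layer_dense(units = 1, activation = 'sigmoid')

model_bin %>% compile(
  loss = 'binary_crossentropy',
  optimizer = optimizer_adam(),
  metrics = c('accuracy')
)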


Compile the model

Specify the loss function, optimizer, and evaluation metric for your model.

R

# Compile the model
model %>% compile(
  loss = 'categorical_crossentropy',
  optimizer = optimizer_adam(),
  metrics = c('accuracy')
)
 
# Print the model summary
summary(model)


Output:

Model: "sequential_5"
________________________________________________________________________________
 Layer (type)                     Output Shape                  Param #
================================================================================
 dense_11 (Dense)                 (None, 16)                    48
 dense_10 (Dense)                 (None, 2)                     34
================================================================================
Total params: 82
Trainable params: 82
Non-trainable params: 0
________________________________________________________________________________

The parameter counts follow from weights plus biases: the hidden layer has 2 × 16 weights + 16 biases = 48 parameters, and the output layer has 16 × 2 weights + 2 biases = 34.

Train the model

Fit the model to your training data.

R

# Train the model
history <- model %>% fit(
  x = train_features,
  y = train_labels,
  epochs = 50,
  batch_size = 32,
  validation_split = 0.2
)


Output:

Epoch 1/50
20/20 [==============================] - 9s 250ms/step - loss: 0.6921 - accuracy: 0.5312 - val_loss: 0.6939 - val_accuracy: 0.5250
Epoch 2/50
20/20 [==============================] - 1s 32ms/step - loss: 0.6919 - accuracy: 0.5391 - val_loss: 0.6937 - val_accuracy: 0.5125
Epoch 3/50
20/20 [==============================] - 1s 48ms/step - loss: 0.6917 - accuracy: 0.5328 - val_loss: 0.6937 - val_accuracy: 0.5188
Epoch 4/50
20/20 [==============================] - 1s 42ms/step - loss: 0.6917 - accuracy: 0.5312 - val_loss: 0.6935 - val_accuracy: 0.5125
Epoch 5/50
20/20 [==============================] - 1s 46ms/step - loss: 0.6918 - accuracy: 0.5375 - val_loss: 0.6936 - val_accuracy: 0.5000
Epoch 6/50
20/20 [==============================] - 1s 38ms/step - loss: 0.6915 - accuracy: 0.5375 - val_loss: 0.6936 - val_accuracy: 0.5125
Epoch 7/50
20/20 [==============================] - 1s 35ms/step - loss: 0.6915 - accuracy: 0.5312 - val_loss: 0.6934 - val_accuracy: 0.5125
Epoch 8/50
20/20 [==============================] - 1s 55ms/step - loss: 0.6914 - accuracy: 0.5375 - val_loss: 0.6936 - val_accuracy: 0.5063
Epoch 9/50
20/20 [==============================] - 1s 55ms/step - loss: 0.6915 - accuracy: 0.5344 - val_loss: 0.6935 - val_accuracy: 0.5125
Epoch 10/50
20/20 [==============================] - 1s 52ms/step - loss: 0.6913 - accuracy: 0.5344 - val_loss: 0.6935 - val_accuracy: 0.5063
Epoch 11/50
20/20 [==============================] - 1s 40ms/step - loss: 0.6912 - accuracy: 0.5297 - val_loss: 0.6935 - val_accuracy: 0.5063
Epoch 12/50
20/20 [==============================] - 1s 42ms/step - loss: 0.6914 - accuracy: 0.5297 - val_loss: 0.6935 - val_accuracy: 0.4938
Epoch 13/50
20/20 [==============================] - 1s 43ms/step - loss: 0.6912 - accuracy: 0.5266 - val_loss: 0.6935 - val_accuracy: 0.5063
Epoch 14/50
20/20 [==============================] - 1s 40ms/step - loss: 0.6912 - accuracy: 0.5281 - val_loss: 0.6934 - val_accuracy: 0.5063
Epoch 15/50
20/20 [==============================] - 1s 42ms/step - loss: 0.6916 - accuracy: 0.5250 - val_loss: 0.6939 - val_accuracy: 0.4938
Epoch 16/50
20/20 [==============================] - 1s 41ms/step - loss: 0.6911 - accuracy: 0.5312 - val_loss: 0.6937 - val_accuracy: 0.4938
Epoch 17/50
20/20 [==============================] - 1s 40ms/step - loss: 0.6915 - accuracy: 0.5359 - val_loss: 0.6932 - val_accuracy: 0.5063
Epoch 18/50
20/20 [==============================] - 1s 38ms/step - loss: 0.6911 - accuracy: 0.5281 - val_loss: 0.6934 - val_accuracy: 0.5063
Epoch 19/50
20/20 [==============================] - 1s 35ms/step - loss: 0.6911 - accuracy: 0.5266 - val_loss: 0.6936 - val_accuracy: 0.4875
Epoch 20/50
20/20 [==============================] - 1s 34ms/step - loss: 0.6911 - accuracy: 0.5219 - val_loss: 0.6936 - val_accuracy: 0.5000
Epoch 21/50
20/20 [==============================] - 1s 47ms/step - loss: 0.6912 - accuracy: 0.5266 - val_loss: 0.6937 - val_accuracy: 0.4938
Epoch 22/50
20/20 [==============================] - 1s 43ms/step - loss: 0.6911 - accuracy: 0.5328 - val_loss: 0.6938 - val_accuracy: 0.4875
Epoch 23/50
20/20 [==============================] - 1s 39ms/step - loss: 0.6912 - accuracy: 0.5203 - val_loss: 0.6933 - val_accuracy: 0.5125
Epoch 24/50
20/20 [==============================] - 1s 36ms/step - loss: 0.6912 - accuracy: 0.5234 - val_loss: 0.6936 - val_accuracy: 0.5000
Epoch 25/50
20/20 [==============================] - 1s 42ms/step - loss: 0.6913 - accuracy: 0.5203 - val_loss: 0.6933 - val_accuracy: 0.5125
Epoch 26/50
20/20 [==============================] - 1s 48ms/step - loss: 0.6912 - accuracy: 0.5266 - val_loss: 0.6936 - val_accuracy: 0.5063
Epoch 27/50
20/20 [==============================] - 1s 51ms/step - loss: 0.6912 - accuracy: 0.5250 - val_loss: 0.6937 - val_accuracy: 0.4938
Epoch 28/50
20/20 [==============================] - 1s 50ms/step - loss: 0.6910 - accuracy: 0.5250 - val_loss: 0.6938 - val_accuracy: 0.4938
Epoch 29/50
20/20 [==============================] - 1s 36ms/step - loss: 0.6913 - accuracy: 0.5266 - val_loss: 0.6940 - val_accuracy: 0.5000
Epoch 30/50
20/20 [==============================] - 1s 33ms/step - loss: 0.6912 - accuracy: 0.5234 - val_loss: 0.6937 - val_accuracy: 0.4938
Epoch 31/50
20/20 [==============================] - 1s 51ms/step - loss: 0.6911 - accuracy: 0.5250 - val_loss: 0.6937 - val_accuracy: 0.4875
Epoch 32/50
20/20 [==============================] - 1s 44ms/step - loss: 0.6911 - accuracy: 0.5219 - val_loss: 0.6939 - val_accuracy: 0.5063
Epoch 33/50
20/20 [==============================] - 1s 44ms/step - loss: 0.6911 - accuracy: 0.5234 - val_loss: 0.6936 - val_accuracy: 0.5000
Epoch 34/50
20/20 [==============================] - 1s 41ms/step - loss: 0.6912 - accuracy: 0.5266 - val_loss: 0.6937 - val_accuracy: 0.4938
Epoch 35/50
20/20 [==============================] - 1s 35ms/step - loss: 0.6913 - accuracy: 0.5250 - val_loss: 0.6938 - val_accuracy: 0.5063
Epoch 36/50
20/20 [==============================] - 1s 43ms/step - loss: 0.6911 - accuracy: 0.5219 - val_loss: 0.6937 - val_accuracy: 0.5063
Epoch 37/50
20/20 [==============================] - 1s 40ms/step - loss: 0.6911 - accuracy: 0.5250 - val_loss: 0.6938 - val_accuracy: 0.4938
Epoch 38/50
20/20 [==============================] - 1s 42ms/step - loss: 0.6912 - accuracy: 0.5188 - val_loss: 0.6938 - val_accuracy: 0.4875
Epoch 39/50
20/20 [==============================] - 1s 37ms/step - loss: 0.6911 - accuracy: 0.5281 - val_loss: 0.6941 - val_accuracy: 0.5063
Epoch 40/50
20/20 [==============================] - 1s 41ms/step - loss: 0.6911 - accuracy: 0.5234 - val_loss: 0.6941 - val_accuracy: 0.5063
Epoch 41/50
20/20 [==============================] - 1s 41ms/step - loss: 0.6912 - accuracy: 0.5281 - val_loss: 0.6940 - val_accuracy: 0.5000
Epoch 42/50
20/20 [==============================] - 1s 43ms/step - loss: 0.6911 - accuracy: 0.5297 - val_loss: 0.6940 - val_accuracy: 0.4938
Epoch 43/50
20/20 [==============================] - 1s 37ms/step - loss: 0.6910 - accuracy: 0.5234 - val_loss: 0.6938 - val_accuracy: 0.4938
Epoch 44/50
20/20 [==============================] - 1s 35ms/step - loss: 0.6911 - accuracy: 0.5234 - val_loss: 0.6938 - val_accuracy: 0.5000
Epoch 45/50
20/20 [==============================] - 1s 39ms/step - loss: 0.6910 - accuracy: 0.5219 - val_loss: 0.6938 - val_accuracy: 0.4938
Epoch 46/50
20/20 [==============================] - 1s 46ms/step - loss: 0.6913 - accuracy: 0.5203 - val_loss: 0.6938 - val_accuracy: 0.5000
Epoch 47/50
20/20 [==============================] - 1s 40ms/step - loss: 0.6911 - accuracy: 0.5188 - val_loss: 0.6941 - val_accuracy: 0.5000
Epoch 48/50
20/20 [==============================] - 1s 34ms/step - loss: 0.6911 - accuracy: 0.5203 - val_loss: 0.6940 - val_accuracy: 0.5063
Epoch 49/50
20/20 [==============================] - 1s 40ms/step - loss: 0.6911 - accuracy: 0.5234 - val_loss: 0.6940 - val_accuracy: 0.4938
Epoch 50/50
20/20 [==============================] - 1s 37ms/step - loss: 0.6911 - accuracy: 0.5266 - val_loss: 0.6942 - val_accuracy: 0.5000

[Training history plot: Build your own neural network classifier in R]

Notice that accuracy hovers around 50% throughout training: the labels in this synthetic dataset were assigned at random, so there is no real signal for the network to learn.

Final Result

R

history


Output:

Final epoch (plot to see history)
loss: 0.6908
accuracy: 0.5109
val_loss: 0.6943
val_accuracy: 0.5375
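
To visualize the training curves (the plot shown above), call plot() on the history object; the keras package provides a plot method for training histories:

R

# Plot loss and accuracy for the training and validation sets
plot(history)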

Evaluate the model

Once the training is complete, evaluate the model on the test dataset.

R

# Evaluate on the held-out test set
eval_result <- model %>% evaluate(
  x = test_features,
  y = test_labels
)
 
# [[ ]] indexing works whether evaluate() returns a named vector or a list
cat("Test loss:", eval_result[[1]], "\n")
cat("Test accuracy:", eval_result[[2]], "\n")


Output:

7/7 [==============================] - 0s 5ms/step - loss: 0.6948 - accuracy: 0.5050
Test loss: 0.6948242
Test accuracy: 0.505
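
Beyond aggregate metrics, you can obtain per-sample class predictions. predict() returns one probability per class for each test sample; taking the index of the most probable column (minus 1, since the labels are 0-based) gives the predicted class. A minimal sketch:

R

# Predicted class probabilities: one row per sample, one column per class
probs <- model %>% predict(test_features)

# Convert to 0/1 class labels by picking the most probable column
pred_classes <- max.col(probs) - 1

# Fraction of correct predictions; should match the test accuracy (~0.5)
mean(pred_classes == test_data$Label)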

That's it! We've built and trained a neural network classifier in R using Keras. You can adjust the architecture, hyperparameters, and dataset to suit your specific classification task.


