
Hidden Layer Perceptron in TensorFlow


In this article, we will learn about the hidden layer perceptron. A hidden layer perceptron is simply a neural network with one or more hidden layers. These hidden layers are what allow the network to learn complex, non-linear functions for a task.

Figure: the simplest hidden layer perceptron, with a single hidden layer

The figure above shows that the input to the final layer comes from the neurons of the hidden layer. In general, in a hidden layer perceptron network, the input to the current layer is the output of the previous layer.
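To make this layer-to-layer flow concrete, here is a minimal NumPy sketch of a single hidden layer feeding an output layer. The layer sizes and the ReLU/softmax choices are illustrative assumptions and are not part of this article's TensorFlow code.

Python3

import numpy as np

def relu(z):
    # Element-wise ReLU non-linearity
    return np.maximum(0, z)

def softmax(z):
    # Numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))         # one input sample with 4 features

W1 = 0.1 * rng.normal(size=(4, 8))  # hidden layer: 4 inputs -> 8 neurons
b1 = np.zeros(8)
h = relu(x @ W1 + b1)               # hidden-layer output

W2 = 0.1 * rng.normal(size=(8, 3))  # output layer: 8 inputs -> 3 classes
b2 = np.zeros(3)
y = softmax(h @ W2 + b2)            # class probabilities

print(y.shape)  # (1, 3): the hidden layer's output is the next layer's input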

We will now see how to implement a hidden layer perceptron network using TensorFlow. The data used for this purpose is a facial emotion recognition dataset containing 48×48 grayscale face images spread across seven emotion classes.

Importing Libraries and Dataset

  • Pandas – This library helps to load the data into a 2D DataFrame format and provides many functions to perform analysis tasks in one go.
  • Numpy – NumPy arrays are very fast and can perform large computations in a very short time.
  • Matplotlib – This library is used to draw visualizations.
  • Sklearn – This module contains many pre-implemented functions to perform tasks from data preprocessing to model development and evaluation.
  • OpenCV – This is an open-source library mainly focused on image processing and handling.
  • Tensorflow – This is an open-source library used for Machine Learning and Artificial Intelligence that provides a range of functions to achieve complex functionality with single lines of code.

Python3




import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

import cv2
from glob import glob
import tensorflow as tf
from tensorflow import keras
from keras import layers

from tqdm.notebook import tqdm
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder, StandardScaler

import warnings
warnings.filterwarnings('ignore')


Now let’s create a data frame of the image paths and the classes to which they belong. Creating a data frame helps us to analyze the distribution of the data across the various classes.

Python3




images = glob('images/train/*/*.jpg')
len(images)


Output:

28821

Python3




df = pd.DataFrame({'image_path': images})
df.head()


Output:

(first five rows of the image_path DataFrame)

Python3




# The path has the form images/train/<class>/<file>.jpg,
# so the third component (index 2) is the class label.
df['label'] = df['image_path'].str.split('/', expand=True)[2]
df.head()


Output:

(first five rows of the DataFrame with the image_path and label columns)

Python3




df.groupby('label').count().plot.bar()
plt.show()


Output:

Bar chart to visualize the number of images in each class

Data Visualization

Here we can clearly see that the dataset is not balanced. Since the main motive of this article is to learn what a hidden layer perceptron is and how to use it, we will not handle the imbalance here, but the sketch below shows one way it could be addressed.
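If one did want to compensate for the imbalance, a common option is to weight the loss per class. The snippet below is only an illustrative sketch and is not part of the original pipeline: it uses scikit-learn's compute_class_weight on the label column, and it assumes the class indices follow the alphabetical ordering that the LabelEncoder will produce later.

Python3

# Illustrative only: compute per-class weights so that under-represented
# emotions contribute more to the loss during training.
from sklearn.utils.class_weight import compute_class_weight

classes = np.sort(df['label'].unique())
weights = compute_class_weight(class_weight='balanced',
                               classes=classes,
                               y=df['label'])
class_weight = dict(enumerate(weights))

# These weights could later be passed to training, e.g.
# model.fit(..., class_weight=class_weight)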

Python3




plt.subplots(figsize=(10, 6))
emotions = df['label'].unique()
for i, emotion in enumerate(emotions):
    plt.subplot(2, 4, i+1)
    x = df[df['label'] == emotion].image_path
    path = x.values[5]
    img = cv2.imread(path)
    plt.imshow(img)
    plt.title(emotion)
plt.show()


Output:

Sample image from each class

Python3




X, Y = [], []
for path in tqdm(images):
    # Read each face image in grayscale and flatten it to a 1D vector.
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    X.append(img.flatten())
    # The class name is the third component of images/train/<class>/<file>.jpg.
    Y.append(path.split('/')[2])

# Encode the string labels as integers.
le = LabelEncoder()
Y = le.fit_transform(Y)


Now let’s convert the image list to a NumPy array and convert the labels to one-hot encoded vectors over the 7 classes.

Python3




X = np.asarray(X)
Y = pd.get_dummies(Y).values
  
X.shape, Y.shape


Output:

((28821, 2304), (28821, 7))

Now, to evaluate the performance of the model as the training goes on, we need to split the whole dataset into training and validation data.

Python3




X_train, X_val,\
    Y_train, Y_val = train_test_split(X, Y,
                                      test_size=0.05,
                                      random_state=10)
X_train.shape, X_val.shape


Output:

((27379, 2304), (1442, 2304))

Python3




# Fit the scaler on the training data only and reuse its statistics
# for the validation data to avoid information leakage.
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_val = scaler.transform(X_val)


Model Architecture

We will implement a Sequential model which will contain the following parts:

  • Two fully connected (Dense) layers with ReLU activation, which take the flattened 2304-pixel image vectors as input.
  • BatchNormalization layers to enable stable and fast training, and a Dropout layer before the final layer to reduce the risk of overfitting.
  • A final output layer that produces softmax probabilities for the seven classes.

Now we will implement a neural network with two hidden layers of 256 neurons each. These hidden layers are what make the network a hidden layer perceptron.

Python3




model = keras.Sequential([
    layers.Dense(256, activation='relu', input_shape=[2304]),
    layers.BatchNormalization(),
    layers.Dense(256, activation='relu'),
    layers.Dropout(0.3),
    layers.BatchNormalization(),
    layers.Dense(7, activation='softmax')
])
  
model.compile(
    loss='categorical_crossentropy',
    optimizer='adam',
    metrics=['AUC']
)


While compiling a model we provide these three essential parameters (a sketch with explicit objects follows the list):

  • optimizer – The method used to minimize the cost function via gradient-descent-style updates.
  • loss – The loss function that is minimized during training and used to monitor whether the model is improving.
  • metrics – Additional quantities computed on the training and validation data to evaluate the model; they are reported but not optimized.
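For illustration only, the same compile step can also be written with explicit objects instead of strings. The learning rate below is just Adam's default and the extra accuracy metric is an optional assumption; neither is used elsewhere in this article.

Python3

# Equivalent compile call with explicit objects (illustrative sketch).
model.compile(
    loss=tf.keras.losses.CategoricalCrossentropy(),
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    metrics=[tf.keras.metrics.AUC(), 'accuracy']
)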

Let’s print the summary of our hidden layer perceptron model to understand the number of parameters present.

Python3




model.summary()


Output:

Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 dense (Dense)               (None, 256)               590080    
                                                                 
 batch_normalization (BatchN  (None, 256)              1024      
 ormalization)                                                   
                                                                 
 dense_1 (Dense)             (None, 256)               65792     
                                                                 
 dropout (Dropout)           (None, 256)               0         
                                                                 
 batch_normalization_1 (Batc  (None, 256)              1024      
 hNormalization)                                                 
                                                                 
 dense_2 (Dense)             (None, 7)                 1799      
                                                                 
=================================================================
Total params: 659,719
Trainable params: 658,695
Non-trainable params: 1,024
_________________________________________________________________
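The numbers in the summary can be checked by hand; the quick arithmetic below (added here as a sanity check, not part of the original article) reproduces the totals.

Python3

# Dense layers have inputs*units weights plus units biases; each
# BatchNormalization layer has 4*256 parameters, of which the
# 2*256 moving statistics are non-trainable.
dense   = 2304 * 256 + 256   # 590,080
bn      = 4 * 256            # 1,024 per BatchNormalization layer
dense_1 = 256 * 256 + 256    # 65,792
dense_2 = 256 * 7 + 7        # 1,799

total = dense + dense_1 + dense_2 + 2 * bn
print(total)                 # 659,719 total parameters
print(total - 2 * 2 * 256)   # 658,695 trainable parameters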

Model Training and Evaluation

Now we are ready to train our model.

Python3




model.fit(X_train, Y_train,
          epochs=5,
          batch_size=64,
          verbose=1,
          validation_data=(X_val, Y_val))


Output:

Epoch 1/5
428/428 [==============================] - 5s 8ms/step - loss: 1.8563 - auc: 0.6886 - val_loss: 1.6245 - val_auc: 0.7530
Epoch 2/5
428/428 [==============================] - 3s 7ms/step - loss: 1.6319 - auc: 0.7554 - val_loss: 1.5624 - val_auc: 0.7769
Epoch 3/5
428/428 [==============================] - 4s 8ms/step - loss: 1.5399 - auc: 0.7845 - val_loss: 1.5510 - val_auc: 0.7814
Epoch 4/5
428/428 [==============================] - 5s 11ms/step - loss: 1.4883 - auc: 0.7999 - val_loss: 1.5106 - val_auc: 0.7929
Epoch 5/5
428/428 [==============================] - 3s 8ms/step - loss: 1.4408 - auc: 0.8146 - val_loss: 1.4992 - val_auc: 0.7971

By using this neural network with two hidden layers we have achieved a validation AUC of roughly 0.8. This means that, given a random positive and negative example, the model ranks the positive one higher about 80% of the time; note that AUC is not the same as classification accuracy.
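As a cross-check (this snippet is an addition, not part of the original article), a comparable ROC AUC can be computed directly from the validation predictions with scikit-learn; the micro-averaged value may differ slightly from Keras' thresholded AUC metric.

Python3

# Micro-averaged ROC AUC over the 7 one-hot columns of the validation split.
from sklearn.metrics import roc_auc_score

val_probs = model.predict(X_val, verbose=0)
print(roc_auc_score(Y_val, val_probs, average='micro'))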

Python3




results = model.evaluate(X_val, Y_val, verbose=0)
print('Validation loss :', results[0])
print('Validation AUC :', results[1])


Output:

Validation loss : 1.4992401599884033
Validation AUC : 0.7971429824829102

