
XOR Implementation in TensorFlow


In this article, we'll learn how to implement an XOR gate in TensorFlow. Before moving on to the TensorFlow implementation, let's look at the XOR gate truth table to get a clear understanding of XOR.

X   Y   X XOR Y
0   0   0
0   1   1
1   0   1
1   1   0

From the above truth table, we can see that the output of the gate is 1 only when exactly one of the inputs is 1; if both inputs are identical, the output is 0.
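As a quick sanity check, the truth table can be reproduced with Python's built-in bitwise XOR operator:

# Reproduce the XOR truth table with Python's bitwise XOR operator
for x in (0, 1):
    for y in (0, 1):
        print(x, y, x ^ y)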

Approach

Now that we know how an XOR gate works, we can implement it in TensorFlow as a small neural network: a hidden layer with a ReLU activation followed by a sigmoid output layer, trained on the four rows of the truth table.

Step 1: Import the required libraries. Here we use TensorFlow and NumPy.

# TensorFlow 1.x compatibility mode is needed for placeholders and sessions
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
import numpy as np

Step 2: Create placeholders for the input and output. The input will be of shape (4 × 2) and the output of shape (4 × 1).

X = tf.placeholder(dtype=tf.float32, shape=(4,2))
Y = tf.placeholder(dtype=tf.float32, shape=(4,1))

Step 3: Create the training inputs and expected outputs.

INPUT_XOR = [[0,0],[0,1],[1,0],[1,1]]
OUTPUT_XOR = [[0],[1],[1],[0]]

Step 4: Give a standard learning rate and the number of epochs that the model should train for.

learning_rate = 0.01
epochs = 10000

Step 5: Create the hidden layer of the model. The hidden layer has weights and biases; it multiplies the input by the weights, adds the biases to the product, and passes the result through a ReLU activation function whose output feeds the next layer.

with tf.variable_scope('hidden'):
    h_w = tf.Variable(tf.truncated_normal([2, 2]), name='weights')
    h_b = tf.Variable(tf.truncated_normal([4, 2]), name='biases')
    h = tf.nn.relu(tf.matmul(X, h_w) + h_b)
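To see why these shapes line up, here is a minimal NumPy sketch of the hidden-layer computation (the random values are only illustrative):

import numpy as np

X_in = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)  # shape (4, 2)
h_w = np.random.randn(2, 2).astype(np.float32)   # hidden weights, shape (2, 2)
h_b = np.random.randn(4, 2).astype(np.float32)   # hidden biases, shape (4, 2)
h = np.maximum(X_in @ h_w + h_b, 0)              # ReLU((4,2) @ (2,2) + (4,2)) -> (4, 2)
print(h.shape)  # (4, 2)

Note that a (4, 2) bias assigns a separate bias row to each of the four training examples, which only works because the batch is fixed at four; a more conventional choice is a (2,)-shaped bias broadcast across the batch.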

Step 6: Create the output layer of the model. Like the hidden layer, it has weights and biases and performs the same computation, but instead of ReLU we use the sigmoid activation function to squash the outputs between 0 and 1.

with tf.variable_scope('output'):
    o_w = tf.Variable(tf.truncated_normal([2, 1]), name='weights')
    o_b = tf.Variable(tf.truncated_normal([4, 1]), name='biases')
    Y_estimation = tf.nn.sigmoid(tf.matmul(h, o_w) + o_b)

Step 7: Create a loss/cost function. This measures how far the model's predictions are from the training targets. Here we take the mean squared error (MSE) between the predicted and the actual output values.

with tf.variable_scope('cost'):
    cost = tf.reduce_mean(tf.squared_difference(Y_estimation, Y))
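A small NumPy example of what this cost computes, using illustrative prediction values:

import numpy as np

y_pred = np.array([[0.1], [0.8], [0.9], [0.2]])  # illustrative predictions
y_true = np.array([[0.0], [1.0], [1.0], [0.0]])
mse = np.mean((y_pred - y_true) ** 2)
print(mse)  # 0.025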

Step 8: Create a training operation that minimizes the cost using the Adam optimizer with the given learning rate.

with tf.variable_scope('train'):
    train = tf.train.AdamOptimizer(learning_rate).minimize(cost)
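Under the hood, minimize() is shorthand for computing gradients and then applying them; the same training op could be written in two explicit steps:

# Illustrative expansion of minimize() into its two underlying steps
optimizer = tf.train.AdamOptimizer(learning_rate)
grads_and_vars = optimizer.compute_gradients(cost)
train = optimizer.apply_gradients(grads_and_vars)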

Step 9: Now that everything is set up, we'll start a TensorFlow session and begin training by initializing all the variables declared above.

with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    print("Training Started")

Step 10: Train the model and print the predictions. We run the training op on the inputs and targets since this is supervised learning, print the cost every 1000 epochs, and finally predict on the inputs and compare the result against the expected outputs.

    log_count_frac = epochs // 10
    for epoch in range(epochs):

        # Training the base network
        session.run(train, feed_dict={X: INPUT_XOR, Y: OUTPUT_XOR})

        # Print the cost every 1000 epochs
        if epoch % log_count_frac == 0:
            cost_results = session.run(cost, feed_dict={X: INPUT_XOR, Y: OUTPUT_XOR})
            print("Cost of Training at epoch {0} is {1}".format(epoch, cost_results))

    print("Training Completed!")
    Y_test = session.run(Y_estimation, feed_dict={X: INPUT_XOR})
    print(np.round(Y_test, decimals=1))
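After training, a quick sanity check (illustrative, not part of the original listing) is to compare the rounded predictions against the expected outputs:

# Hypothetical check: prints True once the network has learned XOR
print(np.array_equal(np.round(Y_test), np.array(OUTPUT_XOR)))

Note that with only two ReLU hidden units, training can occasionally get stuck in a poor local minimum, in which case re-running the script helps.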

Below is the complete implementation.

Python3




# Import the TensorFlow library.
# Since we'll be using TensorFlow 1.x
# functionality, import the v1 compatibility API
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
import numpy as np
 
# Create placeholders for input X and output Y
X = tf.placeholder(dtype=tf.float32, shape=(4, 2))
Y = tf.placeholder(dtype=tf.float32, shape=(4, 1))
 
# Give training input and label
INPUT_XOR = [[0,0],[0,1],[1,0],[1,1]]
OUTPUT_XOR = [[0],[1],[1],[0]]
 
# Give a standard learning rate and the number
# of epochs the model has to train for.
learning_rate = 0.01
epochs = 10000
 
# Create/Initialize a Hidden Layer variable
with tf.variable_scope('hidden'):

    # Initialize weights and biases for the
    # hidden layer randomly with mean=0 and
    # std_dev=1
    h_w = tf.Variable(tf.truncated_normal([2, 2]), name='weights')
    h_b = tf.Variable(tf.truncated_normal([4, 2]), name='biases')
     
    # Pass the matrix multiplied Input and
    # weights added with Bias to the relu
    # activation function
    h = tf.nn.relu(tf.matmul(X, h_w) + h_b)
 
# Create/Initialize an Output Layer variable
with tf.variable_scope('output'):

    # Initialize weights and biases for the
    # output layer randomly whose mean=0 and
    # std_dev=1
    o_w = tf.Variable(tf.truncated_normal([2, 1]), name='weights')
    o_b = tf.Variable(tf.truncated_normal([4, 1]), name='biases')
     
    # Pass the matrix multiplied hidden layer
    # Input and weights added with Bias
    # to a sigmoid activation function
    Y_estimation = tf.nn.sigmoid(tf.matmul(h, o_w) + o_b)
 
# Create/Initialize Loss function variable
with tf.variable_scope('cost'):

    # Calculate cost as the mean squared
    # error between the estimated Y value
    # and the actual Y value
    cost = tf.reduce_mean(tf.squared_difference(Y_estimation, Y))
 
# Create/Initialize Training model variable
with tf.variable_scope('train'):

    # Train the model with the Adam optimizer,
    # the previously initialized learning rate
    # and the cost from the previous step
    train = tf.train.AdamOptimizer(learning_rate).minimize(cost)
 
# Start a TensorFlow session
with tf.Session() as session:

    # initialize the session variables
    session.run(tf.global_variables_initializer())
    print("Training Started")
     
    # Log every epochs // 10 = 1000 epochs
    log_count_frac = epochs // 10
    for epoch in range(epochs):
       
        # Training the base network
        session.run(train, feed_dict={X: INPUT_XOR, Y:OUTPUT_XOR})
 
        # log training parameters
        # Print cost for every 1000 epochs
        if epoch % log_count_frac == 0:
            cost_results = session.run(cost, feed_dict={X: INPUT_XOR, Y:OUTPUT_XOR})
            print("Cost of Training at epoch {0} is {1}".format(epoch, cost_results))
 
    print("Training Completed !")
    Y_test = session.run(Y_estimation, feed_dict={X:INPUT_XOR})
    print(np.round(Y_test, decimals=1))


 

 

Output:

 

Output of the above program
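For readers on modern TensorFlow, the same network can be expressed with the Keras API bundled in TensorFlow 2.x. The following is a minimal, illustrative sketch (not part of the original program) that mirrors the architecture and hyperparameters used above:

# Minimal TensorFlow 2.x / Keras sketch of the same XOR network
import numpy as np
import tensorflow as tf

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
Y = np.array([[0], [1], [1], [0]], dtype=np.float32)

# Two-unit hidden ReLU layer and a one-unit sigmoid output,
# mirroring the graph built above
model = tf.keras.Sequential([
    tf.keras.layers.Dense(2, activation='relu', input_shape=(2,)),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer=tf.keras.optimizers.Adam(0.01), loss='mse')
model.fit(X, Y, epochs=10000, verbose=0)
print(np.round(model.predict(X), decimals=1))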

 


