Implementation of Perceptron Algorithm for NAND Logic Gate with 2-bit Binary Input

Last Updated: 08 Jul, 2020


In the field of Machine Learning, the Perceptron is a Supervised Learning Algorithm for binary classifiers. The Perceptron Model implements the following function:

    \[ \begin{array}{c} \hat{y}=\Theta\left(w_{1} x_{1}+w_{2} x_{2}+\ldots+w_{n} x_{n}+b\right) \\ =\Theta(\mathbf{w} \cdot \mathbf{x}+b) \\ \text { where } \Theta(v)=\left\{\begin{array}{cc} 1 & \text { if } v \geqslant 0 \\ 0 & \text { otherwise } \end{array}\right. \end{array} \]

For a particular choice of the weight vector $\boldsymbol{w}$ and bias parameter $\boldsymbol{b}$, the model predicts output $\boldsymbol{\hat{y}}$ for the corresponding input vector $\boldsymbol{x}$.
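
For example (with illustrative numbers, not taken from the article), choosing $\boldsymbol{w} = (2, -1)$, $b = -1$ and input $\boldsymbol{x} = (1, 1)$ gives:

    \[ \hat{y} = \Theta(2 \cdot 1 + (-1) \cdot 1 - 1) = \Theta(0) = 1 \]

since $\Theta(v) = 1$ for $v \geqslant 0$.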

The NAND logical function truth table for 2-bit binary variables, i.e., the input vector $\boldsymbol{x} : (\boldsymbol{x_{1}}, \boldsymbol{x_{2}})$ and the corresponding output $\boldsymbol{y}$, is:

$\boldsymbol{x_{1}}$  $\boldsymbol{x_{2}}$  $\boldsymbol{y}$
0  0  1
0  1  1
1  0  1
1  1  0

We can observe that $NAND(\boldsymbol{x_{1}}, \boldsymbol{x_{2}}) = NOT(AND(\boldsymbol{x_{1}}, \boldsymbol{x_{2}}))$.
Now, for the weight vector $\boldsymbol{w} : (\boldsymbol{w_{1}}, \boldsymbol{w_{2}})$ associated with the input vector $\boldsymbol{x} : (\boldsymbol{x_{1}}, \boldsymbol{x_{2}})$ at the AND node, the corresponding Perceptron Function can be defined as:



    \[ \hat{y}' = \Theta\left(w_{1} x_{1}+w_{2} x_{2}+b_{AND}\right) \]

The output $\hat{y}'$ of the AND node then serves as the input to the NOT node with weight $\boldsymbol{w_{NOT}}$. The corresponding output $\boldsymbol{\hat{y}}$ is the final output of the NAND logic function, and the associated Perceptron Function can be defined as:

    \[ \hat{y} = \Theta\left(w_{NOT}\, \hat{y}' + b_{NOT}\right) \]


For the implementation, the considered weight parameters are $\boldsymbol{w_{1}} = 1, \boldsymbol{w_{2}} = 1, \boldsymbol{w_{NOT}} = -1$ and the bias parameters are $\boldsymbol{b_{AND}} = -1.5, \boldsymbol{b_{NOT}} = 0.5$.
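
As a quick sanity check (a worked example derived from the parameter values above), consider the input $(\boldsymbol{x_{1}}, \boldsymbol{x_{2}}) = (1, 1)$:

    \[ \hat{y}' = \Theta(1 \cdot 1 + 1 \cdot 1 - 1.5) = \Theta(0.5) = 1, \quad \hat{y} = \Theta(-1 \cdot 1 + 0.5) = \Theta(-0.5) = 0 \]

which matches $NAND(1, 1) = 0$ from the truth table.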

Python Implementation:




# importing Python library
import numpy as np
  
# define Unit Step Function
def unitStep(v):
    if v >= 0:
        return 1
    else:
        return 0
  
# design Perceptron Model
def perceptronModel(x, w, b):
    v = np.dot(w, x) + b
    y = unitStep(v)
    return y
  
# NOT Logic Function
# wNOT = -1, bNOT = 0.5
def NOT_logicFunction(x):
    wNOT = -1
    bNOT = 0.5
    return perceptronModel(x, wNOT, bNOT)
  
# AND Logic Function
# w1 = 1, w2 = 1, bAND = -1.5
def AND_logicFunction(x):
    w = np.array([1, 1])
    bAND = -1.5
    return perceptronModel(x, w, bAND)
  
# NAND Logic Function
# with AND and NOT  
# function calls in sequence
def NAND_logicFunction(x):
    output_AND = AND_logicFunction(x)
    output_NOT = NOT_logicFunction(output_AND)
    return output_NOT
  
# testing the Perceptron Model
test1 = np.array([0, 1])
test2 = np.array([1, 1])
test3 = np.array([0, 0])
test4 = np.array([1, 0])
  
print("NAND({}, {}) = {}".format(0, 1, NAND_logicFunction(test1)))
print("NAND({}, {}) = {}".format(1, 1, NAND_logicFunction(test2)))
print("NAND({}, {}) = {}".format(0, 0, NAND_logicFunction(test3)))
print("NAND({}, {}) = {}".format(1, 0, NAND_logicFunction(test4)))
Output:
NAND(0, 1) = 1
NAND(1, 1) = 0
NAND(0, 0) = 1
NAND(1, 0) = 1
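
Note that NAND is itself linearly separable, so it can also be realized by a single perceptron without composing AND and NOT. Below is a minimal sketch of that idea; the weights $\boldsymbol{w_{1}} = \boldsymbol{w_{2}} = -1$ and bias $\boldsymbol{b} = 1.5$ are one valid choice assumed here for illustration (the function name NAND_singlePerceptron is likewise hypothetical, not part of the original implementation):

# importing Python library
import numpy as np

# single-perceptron NAND: v = -x1 - x2 + 1.5
# v >= 0 for every input except (1, 1), reproducing the NAND truth table
def NAND_singlePerceptron(x):
    w = np.array([-1, -1])   # assumed weights (one valid choice)
    b = 1.5                  # assumed bias
    v = np.dot(w, x) + b
    return 1 if v >= 0 else 0

print("NAND(1, 1) =", NAND_singlePerceptron(np.array([1, 1])))  # prints 0
print("NAND(0, 1) =", NAND_singlePerceptron(np.array([0, 1])))  # prints 1

This collapses the two-node composition into a single decision boundary, which works because one line can separate the input $(1, 1)$ from the other three inputs.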

Here, the model's predicted output ($\boldsymbol{\hat{y}}$) for each test input exactly matches the conventional NAND logic gate output ($\boldsymbol{y}$) according to the truth table for 2-bit binary inputs.
Hence, it is verified that the perceptron algorithm for the NAND logic gate is correctly implemented.




