
Facial Expression Recognizer using FER – Using Deep Neural Net


In our daily life, we knowingly or unknowingly make different kinds of facial expressions, and these movements convey our emotional state.

We can judge the mood and mental state of another person from their facial expression. In the late twentieth century, Ekman and Friesen defined `six` basic emotions.

These expressions do not change across cultures; they are universal. The six facial expressions are:

  • Happiness
  • Sadness
  • Fear
  • Anger
  • Surprise
  • Disgust

In this article, I'll share how to build a Facial Expression Recognizer using the `FER` library for Python.

FER Library: 

The Facial Expression Recognition (FER) library was developed by Justin Shenk. It requires OpenCV>=3.2 and TensorFlow>=1.7.0 to be installed on the system. Faces are detected using OpenCV's Haar Cascade classifier. For more information and the source code of the FER library, you can visit the FER GitHub page.

Setting up our code!

For this article, you can use an online code editor such as Replit, or your favorite local code editor. FER can be installed through pip:

`$ pip install fer`
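
Since FER depends on OpenCV and TensorFlow, and our code also uses matplotlib, you may need to install those too if they are not already in your environment; the package names here assume the standard PyPI distributions:

`$ pip install tensorflow opencv-python matplotlib`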

1. Edit your new `main.py` file with the following code:

Python3

import cv2
from fer import FER
import matplotlib.pyplot as plt
import matplotlib.image as mpimg


The code above does the following:

  • Imports `cv2`, commonly known as the OpenCV library. It provides various computer vision solutions; we will use it mainly for reading and manipulating images. You can read more about it in the OpenCV docs.
  • Imports the `FER` class from the `fer` library. This is the main library we will be using in this article.
  • Imports the `matplotlib.pyplot` module as `plt`. It will help us plot graphs.
  • Imports the `matplotlib.image` module as `mpimg`. It is used to read an image and plot it on a graph with `x` and `y` axes.

2. Next, we'll predict the emotions in a still image by giving it an input image:

Python3

# Input image
input_image = cv2.imread("smile.jpg")
emotion_detector = FER()
# Print the detected face's box and emotion scores
print(emotion_detector.detect_emotions(input_image))



Input Image

Take a moment to work out what the code above does. Here is the explanation:

  • The `input_image` variable loads an image from the specified file, in our case `smile.jpg`, using the `cv2.imread("smile.jpg")` method.
  • Next, we assign `FER()`, which was imported earlier, to the `emotion_detector` variable.
  • Printing `emotion_detector.detect_emotions(input_image)` returns a list of dictionaries of bounding-box notations and emotion scores.

Below is a sample output:

Python3

[{'box': [277, 90, 48, 63], 'emotions': {
  'angry': 0.02, 'disgust': 0.0, 'fear': 0.05,
  'happy': 0.16, 'neutral': 0.09, 'sad': 0.27, 'surprise': 0.41}}]
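
The returned value is a plain Python list with one dictionary per detected face, so the box and scores can be pulled out by index (a small sketch reusing the sample values above):

Python3

result = emotion_detector.detect_emotions(input_image)
first_face = result[0]
print(first_face["box"])       # e.g. [277, 90, 48, 63]
print(first_face["emotions"])  # e.g. {'angry': 0.02, ..., 'surprise': 0.41}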


What is going on under the hood?

The `emotion_detector` object is the main part of our code. Let's see what it does:

`emotion_detector` wraps the `FER()` model. It detects where a face is located in the image and returns a list of dictionaries of bounding-box notations, along with a probability between `0` and `1` for each emotion. Under the hood, FER uses a Keras model built with a Convolutional Neural Network (CNN), with weights stored in an `HDF5` file. By default, FER detects faces using OpenCV's Haar Cascade classifier. Alternatively, faces can be found with the more accurate Multi-task Cascaded Convolutional Networks, `MTCNN` for short, and a custom emotion model hosted on the Peltarion API can be used in place of the bundled Keras model. To switch to MTCNN, just pass `mtcnn=True` to the constructor, as in `detector = FER(mtcnn=True)`. From here on, we will use the MTCNN model for more accurate results.
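
For example, the two detector variants described above differ only in the constructor flag (a minimal sketch; both lines use only the `FER` API shown in this article):

Python3

from fer import FER

# Default: face detection with OpenCV's Haar Cascade classifier
detector_haar = FER()

# More accurate (but slower) face detection with MTCNN
detector_mtcnn = FER(mtcnn=True)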

Data:

Data is known as the new oil of the twenty-first century. We can do many different things with the data generated above, which we will discuss in the ideas at the end of this article, so stay tuned. For now, we will store the detected face and its emotion scores, then draw a yellow bounding box around the face and print the scores below it. Let's see how we can do this.

Python3

import cv2
from fer import FER
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

# Load the input image and create an MTCNN-based detector
input_image = cv2.imread("smile.jpg")
emotion_detector = FER(mtcnn=True)


The code above is everything we have written so far.

Next, we will declare a new variable `result` and store the detector's output in it:

Python3

# Save output in result variable
result = emotion_detector.detect_emotions(input_image)


Now it is time to draw the yellow bounding box around the detected face:

Python3

bounding_box = result[0]["box"]
emotions = result[0]["emotions"]
# Draw a yellow bounding box around the detected face
cv2.rectangle(
    input_image,
    (bounding_box[0], bounding_box[1]),
    (bounding_box[0] + bounding_box[2], bounding_box[1] + bounding_box[3]),
    (0, 155, 255),
    2,
)


An explanation of the above code:

  • The `bounding_box` variable stores the bounding-box coordinates that were generated and saved in the `result` variable. It accesses the `"box"` key of the `0th` element (the first element, in common language). If you print this variable, you will get output something like `(298, 361, 825, 825)`, in `[x, y, width, height]` format.
  • Likewise, the `emotions` variable accesses the `"emotions"` key of the `0th` element of `result`. If you print it, you will get output like `{'angry': 0.08, 'disgust': 0.03, 'fear': 0.05, 'happy': 0.09, 'sad': 0.35, 'surprise': 0.0, 'neutral': 0.38}`.
  • The `cv2.rectangle()` method is used to draw a rectangle around the detected human face. It takes `image, start_point, end_point, color, thickness` as its parameters.
  • In our case, we pass `input_image` as the image to draw on, and `(bounding_box[0], bounding_box[1])` as `start_point`, where `bounding_box[0]` is the X coordinate and `bounding_box[1]` is the Y coordinate of the top-left corner.
  • Next, `end_point` is `(bounding_box[0] + bounding_box[2], bounding_box[1] + bounding_box[3])`, where `bounding_box[0] + bounding_box[2]` is the X coordinate and `bounding_box[1] + bounding_box[3]` is the Y coordinate of the bottom-right corner (see the worked sketch after this list).
  • `(0, 155, 255)` is the yellow box color, given in OpenCV's BGR channel order.
  • `thickness` is given as `2`, to draw the box 2px thick.
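
To make the coordinate arithmetic concrete, here is a small worked sketch that unpacks the illustrative box values from the list above (not the output of a real run):

Python3

# Sample box in FER's [x, y, width, height] format
bounding_box = (298, 361, 825, 825)

x, y, w, h = bounding_box
start_point = (x, y)         # (298, 361): top-left corner
end_point = (x + w, y + h)   # (1123, 1186): bottom-right corner
print(start_point, end_point)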

Add Score to Bounding Box

Now we will add the emotion scores to the bounding box using the following code:

Python3

# Top emotion for the detected face
emotion_name, score = emotion_detector.top_emotion(input_image)

# Print every emotion and its score below the bounding box
for index, (emotion_name, score) in enumerate(emotions.items()):
    color = (211, 211, 211) if score < 0.01 else (255, 0, 0)
    emotion_score = "{}: {}".format(emotion_name, "{:.2f}".format(score))

    cv2.putText(
        input_image,
        emotion_score,
        (bounding_box[0], bounding_box[1] + bounding_box[3] + 30 + index * 15),
        cv2.FONT_HERSHEY_SIMPLEX,
        0.5,
        color,
        1,
        cv2.LINE_AA,
    )

# Save the result in a new image file
cv2.imwrite("emotion.jpg", input_image)


The above code does the following:

  • Stores the top emotion and its score in the `emotion_name, score` variables using the `emotion_detector.top_emotion(input_image)` method.
  • Iterates over the index and value of each item in the `emotions` dictionary using a basic `for` loop and the `enumerate()` method, so one emotion is handled at a time. This happens seven times, since the detector scores the six basic emotions plus neutral. You can read more in the Python docs for `enumerate()`.
  • Assigns a color to the `color` variable based on the score: near-zero scores are drawn in gray, the rest in blue. A conditional expression (the shorthand if/else) keeps this readable.
  • The `emotion_score` variable stores the formatted emotion name and score that will be printed.
  • The `cv2.putText()` method is used to draw a text string on the image.
  • Outside the `for` loop, the `cv2.imwrite()` method creates and saves the final output image under the name `"emotion.jpg"`.

Display Output Image

Python3

# Read the image file using matplotlib's image module
result_image = mpimg.imread('emotion.jpg')
imgplot = plt.imshow(result_image)
# Display the output image
plt.show()


We are at the final step of our article, where we display the output image using matplotlib. The `mpimg.imread('emotion.jpg')` method reads the image file given in parentheses, `plt.imshow(result_image)` stores the plotted image in `imgplot`, and finally `plt.show()` displays our output image.

Output Image

Our Facial Expression Recognizer is just a starting point for new and innovative FER systems. Now you can build something new with what you have learned so far. Here are some ideas you can try:

  1. We have built FER for still images. You can try FER for video, analyzing the facial emotions present frame by frame.
  2. Creating a real-time Facial Expression Recognizer using a live camera would be a very exciting thing to try (see the sketch after this list).
  3. You can create a recommendation system based on human emotions by extracting and manipulating data from the `result` variable, like recommending fun books, videos, and GIFs to people with sad emotions, or motivational books and videos to people in fear or anger.
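
As a starting point for idea 2, here is a minimal sketch of a live-camera loop. It reuses only the FER and OpenCV calls shown in this article plus OpenCV's standard `VideoCapture`; the camera index `0` and the `q`-to-quit key are assumptions you may need to adjust, and you can drop `mtcnn=True` for a higher frame rate:

Python3

import cv2
from fer import FER

detector = FER(mtcnn=True)
capture = cv2.VideoCapture(0)  # default webcam

while True:
    ret, frame = capture.read()
    if not ret:
        break
    # Draw a box and the top emotion label on every detected face
    for face in detector.detect_emotions(frame):
        x, y, w, h = face["box"]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 155, 255), 2)
        top_emotion = max(face["emotions"], key=face["emotions"].get)
        cv2.putText(frame, top_emotion, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 155, 255), 1, cv2.LINE_AA)
    cv2.imshow("Real-time FER", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

capture.release()
cv2.destroyAllWindows()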

