Blur Faces in Videos using OpenCV in Python
Last Updated: 30 Dec, 2022
OpenCV is an open-source cross-platform library for various operating systems, including Windows, Linux, and macOS, for computer vision, machine learning, and image processing. With the help of OpenCV, we can easily process images and videos to recognize objects, faces, or even someone’s handwriting.
In this article, we will see how to blur faces in images and videos using OpenCV in Python.
Requirements
In addition to the OpenCV module, we need the Haar cascade frontal face classifier to recognize faces. It is provided as an XML file and is used to detect faces in images and videos.
Make sure to download the Haar Cascade Frontal Face Classifier from this link: haarcascade_frontalface_default.xml.
Blur the faces in Images using OpenCV in Python
First, we will load an image that contains some faces so that we can test our code. We then convert it from BGR to RGB (the format matplotlib expects), detect faces using the Haar cascade classifier, blur the region inside each detected bounding box, and display the result alongside the original image.
Python3
import numpy as np
import cv2
import matplotlib.pyplot as plt

# Load the image and convert from OpenCV's BGR to RGB for matplotlib
image = cv2.cvtColor(cv2.imread('image.png'), cv2.COLOR_BGR2RGB)

print('Original Image')
plt.imshow(image)
plt.axis('off')
plt.show()

# Load the Haar cascade and detect faces
cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
face_data = cascade.detectMultiScale(image, scaleFactor=2.0, minNeighbors=4)

# Blur each detected face region first, then draw the bounding box
# so the box itself stays sharp (kernel size for medianBlur must be odd)
for x, y, w, h in face_data:
    image[y:y + h, x:x + w] = cv2.medianBlur(image[y:y + h, x:x + w], 35)
    cv2.rectangle(image, (x, y), (x + w, y + h), (255, 0, 0), 5)

print('Blurred Image')
plt.imshow(image)
plt.axis('off')
plt.show()
Output:
Original image and the blurred face image
Blur the faces in Videos using OpenCV in Python
First, we will load a video that contains some faces so that we can test our code. For each frame, we convert it to grayscale, detect faces using the Haar cascade classifier, blur the region inside each detected bounding box, and display the resulting frame.
Python3
import cv2

# Load the Haar cascade and open the video file
cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
video_capture = cv2.VideoCapture('video.mp4')

while video_capture.isOpened():
    ret, frame = video_capture.read()
    if not ret:  # stop when the video ends or a frame cannot be read
        break

    # Detect faces on a grayscale copy of the frame
    gray_image = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    face_data = cascade.detectMultiScale(gray_image, scaleFactor=1.3,
                                         minNeighbors=5)

    # Blur each detected face region first, then draw the bounding box
    for x, y, w, h in face_data:
        frame[y:y + h, x:x + w] = cv2.medianBlur(frame[y:y + h, x:x + w], 35)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 5)

    cv2.imshow('face blurred', frame)
    if cv2.waitKey(1) == ord('q'):
        break

video_capture.release()
cv2.destroyAllWindows()
Output: