
Python OpenCV – drawMatchesKnn() Function

OpenCV (Open Source Computer Vision) is a free and open-source library of computer vision and machine learning algorithms designed to help developers build computer vision applications. It provides a wide range of tools and functions for tasks such as image and video processing, object detection and recognition, 3D reconstruction, and more.

One of the key features of OpenCV is its ability to process images and videos in real-time, making it an important tool for building applications that need to perform tasks such as object tracking and face recognition in real time. It also provides a number of machine-learning algorithms that can be used to train models for tasks such as object detection and classification.



The drawMatchesKnn() function in Python’s OpenCV library is used to draw the matches between the key points of two images, where each query key point is paired with a list of candidate matches such as the one returned by knnMatch(). It takes the following arguments:

cv2.drawMatchesKnn(img1,
                   keypoints1,
                   img2,
                   keypoints2,
                   matches,
                   outImg,
                   matchColor=None,
                   singlePointColor=None,
                   matchesMask=None,
                   flags=None)

Here img1 and img2 are the source images, and keypoints1 and keypoints2 are the key points detected in each of them. matches is a list of lists of DMatch objects, such as the one returned by knnMatch(). outImg receives the output image (pass None to let OpenCV allocate it), matchColor and singlePointColor set the colors of the match lines and of unmatched key points respectively (None selects random colors), matchesMask selects which matches are drawn, and flags adjusts the drawing behavior.

The feature detector refers to the method used to detect key points in the images and compute their descriptors. There are many different feature detectors available, each with its own strengths and weaknesses. Some common feature detectors include:

- SIFT (Scale-Invariant Feature Transform)
- SURF (Speeded-Up Robust Features)
- ORB (Oriented FAST and Rotated BRIEF)
- BRISK (Binary Robust Invariant Scalable Keypoints)
- AKAZE (Accelerated-KAZE)

Each of these feature detectors has a different set of parameters that can be adjusted to optimize its performance. The choice of feature detector depends on the specific characteristics of the images and the desired properties of the key points.
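As a quick sketch, each of these detectors is created through a factory function in OpenCV and exposes the same detectAndCompute() interface (SURF is omitted below because it requires a non-free OpenCV build; the nfeatures value is just an illustrative choice):

import cv2

sift = cv2.SIFT_create()                # float descriptors, scale and rotation invariant
orb = cv2.ORB_create(nfeatures=1000)    # fast binary descriptors
brisk = cv2.BRISK_create()              # binary descriptors
akaze = cv2.AKAZE_create()              # binary descriptors

# All of them share the same interface:
# keypoints, descriptors = detector.detectAndCompute(image, None)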

It is important to note that the drawMatchesKnn() function is not very efficient and may be slow for large numbers of matches. If you need to draw a large number of matches, you may want to consider using another method, such as drawing the matches manually using the line() function.
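As a rough illustration, here is a minimal sketch of such manual drawing, assuming both images are 3-channel BGR arrays and matches is a flat list of cv2.DMatch objects (for example, the best match from each pair after the ratio test); draw_matches_manually is an illustrative helper, not an OpenCV API:

import numpy as np
import cv2

def draw_matches_manually(img1, kp1, img2, kp2, matches):
    # Place the two images side by side on a single canvas
    h1, w1 = img1.shape[:2]
    h2, w2 = img2.shape[:2]
    canvas = np.zeros((max(h1, h2), w1 + w2, 3), dtype=np.uint8)
    canvas[:h1, :w1] = img1
    canvas[:h2, w1:w1 + w2] = img2

    # One line per match; x-coordinates in the second image are
    # offset by the width of the first image
    for m in matches:
        x1, y1 = kp1[m.queryIdx].pt
        x2, y2 = kp2[m.trainIdx].pt
        cv2.line(canvas, (int(x1), int(y1)), (int(x2) + w1, int(y2)),
                 color=(0, 255, 0), thickness=1)
    return canvas

Each line connects kp1[m.queryIdx] in the first image to kp2[m.trainIdx] in the second, which is how drawMatchesKnn() lays out its output as well.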

The appearance of the resulting image will depend on the specific parameters that you pass to the drawMatchesKnn() function, such as the colors of the lines and key points and the mask that specifies which matches to draw. You can customize the appearance of the image to suit your needs.
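For instance, here is a hedged sketch of two common customizations, reusing the variable names from the examples below: passing matchColor=None so that OpenCV picks a random color per match, and hiding unmatched key points through the flags parameter:

import cv2

result = cv2.drawMatchesKnn(
    image1, keypoint1, image2, keypoint2, matches,
    outImg=None,
    matchColor=None,            # None draws each match in a random color
    singlePointColor=None,
    matchesMask=good_matches,   # draw only the matches kept by the ratio test
    flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)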


FLANN (Fast Library for Approximate Nearest Neighbors) is an efficient library for performing fast approximate nearest neighbor searches. It can be used to find the nearest neighbors of a set of query points in a large dataset, and it is particularly useful when the dataset is too large for an exhaustive search to be practical. FLANN works by constructing a data structure (such as a KD-tree or a hierarchical clustering tree) that allows it to search the dataset quickly. Because the search is approximate, the returned neighbors may not be the exact nearest neighbors, but they will typically be very close.
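As a sketch, FLANN’s behavior in OpenCV is controlled through its index parameters: float descriptors such as SIFT’s are typically indexed with a KD-tree, while binary descriptors such as ORB’s use LSH. The parameter values below are common starting points rather than tuned settings:

import cv2

# KD-tree index for float descriptors such as SIFT's
FLANN_INDEX_KDTREE = 1
kdtree_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)

# LSH index for binary descriptors such as ORB's
FLANN_INDEX_LSH = 6
lsh_params = dict(algorithm=FLANN_INDEX_LSH,
                  table_number=6, key_size=12, multi_probe_level=1)

# A higher 'checks' value gives a more exhaustive but slower search
search_params = dict(checks=150)

flann = cv2.FlannBasedMatcher(kdtree_params, search_params)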

BFMatcher, on the other hand, stands for Brute-Force Matcher. It is a simple and straightforward method for matching descriptors: it compares each descriptor in one set with every descriptor in the other set and returns the matches with the lowest distance (Euclidean by default). BFMatcher is exact and easy to use, and it is effective for small descriptor sets, but it becomes inefficient as the sets grow, because every descriptor must be compared with every other descriptor. This makes it less suitable for large-scale applications where speed is a concern.

In summary, FLANN is a more efficient method for finding nearest neighbors in large datasets, while BFMatcher is a simpler method that can be used for small datasets.
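For comparison, a minimal BFMatcher setup might look like this; the norm should match the descriptor type, and the variable names are illustrative:

import cv2

# For float descriptors such as SIFT's (Euclidean distance)
bf_sift = cv2.BFMatcher(cv2.NORM_L2)

# For binary descriptors such as ORB's (Hamming distance);
# crossCheck keeps only mutual best matches and is meant for
# match() rather than knnMatch() with k > 1
bf_orb = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

The complete example below puts these pieces together, using SIFT key points with the FLANN-based matcher.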




import cv2

# Load the images
image1 = cv2.imread('Bhagavad-Gita.jpg')
image2 = cv2.imread('Geeta.jpg')

# Optionally convert to grayscale before detection:
# image1 = cv2.cvtColor(image1, cv2.COLOR_BGR2GRAY)
# image2 = cv2.cvtColor(image2, cv2.COLOR_BGR2GRAY)
  
# Initiate SIFT detector
sift = cv2.SIFT_create()
  
# find the keypoints and descriptors with SIFT
keypoint1, descriptors1 = sift.detectAndCompute(image1, None)
keypoint2, descriptors2 = sift.detectAndCompute(image2, None)
  
# FLANN parameters: a KD-tree index suits SIFT's float descriptors
FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=150)   # or pass an empty dictionary
  
flann = cv2.FlannBasedMatcher(index_params, search_params)
  
matches = flann.knnMatch(descriptors1, descriptors2, k=2)

# Draw only the good matches, so create a mask with one entry per pair
good_matches = [[0, 0] for _ in range(len(matches))]

# Lowe's ratio test: keep a match only if it is clearly better
# than the second-best candidate
for i, (m, n) in enumerate(matches):
    if m.distance < 0.5 * n.distance:
        good_matches[i] = [1, 0]
  
  
# Draw the matches using drawMatchesKnn()
matched = cv2.drawMatchesKnn(image1,
                             keypoint1,
                             image2,
                             keypoint2,
                             matches,
                             outImg=None,
                             matchColor=(0, 155, 0),
                             singlePointColor=(0, 255, 255),
                             matchesMask=good_matches,
                             flags=0)

# Save the result (imwrite returns True on success)
cv2.imwrite('Match.jpg', matched)

Output:

True

Matches drawn with drawMatchesKnn() using the FLANN-based matcher

This code first loads the two images and detects the key points and descriptors using the SIFT feature detector. It then uses the FLANN-based matcher to find the two nearest neighbors of each descriptor, filters the matches using Lowe’s ratio test, draws the surviving matches with drawMatchesKnn(), and saves the resulting image. Note that the ratio threshold of 0.5 used here is stricter than the 0.7 to 0.8 commonly suggested, so it keeps fewer but more reliable matches.

With BFMatcher




import cv2

# Load the images
image1 = cv2.imread('Bhagavad-Gita.jpg')
image2 = cv2.imread('Geeta.jpg')

# Optionally convert to grayscale before detection:
# image1 = cv2.cvtColor(image1, cv2.COLOR_BGR2GRAY)
# image2 = cv2.cvtColor(image2, cv2.COLOR_BGR2GRAY)
  
# Initiate SIFT detector
sift = cv2.SIFT_create()
  
# find the keypoints and descriptors with SIFT
keypoint1, descriptors1 = sift.detectAndCompute(image1, None)
keypoint2, descriptors2 = sift.detectAndCompute(image2, None)
  
  
# Initialize the brute-force matcher; the default NORM_L2 norm suits SIFT
bf_matcher = cv2.BFMatcher()
matches = bf_matcher.knnMatch(descriptors1, descriptors2, k=2)

# Draw only the good matches, so create a mask with one entry per pair
good_matches = [[0, 0] for _ in range(len(matches))]

# Ratio test as per Lowe's paper
for i, (m, n) in enumerate(matches):
    if m.distance < 0.5 * n.distance:
        good_matches[i] = [1, 0]
          
# Draw the matches using drawMatchesKnn()
matched = cv2.drawMatchesKnn(image1,
                             keypoint1,
                             image2,
                             keypoint2,
                             matches,
                             outImg=None,
                             matchColor=(0, 0, 255),
                             singlePointColor=(0, 255, 255),
                             matchesMask=good_matches,
                             flags=0)

# Save the result (imwrite returns True on success)
cv2.imwrite('BFMatch.jpg', matched)

Output:

True

Matches drawn with drawMatchesKnn() using BFMatcher

Important Factors

There are several factors that can affect the quality of the matches that drawMatchesKnn() visualizes:

- The feature detector used to compute the key points and descriptors
- The matcher (FLANN-based or brute-force) and its parameters
- The ratio-test threshold used to filter the matches; lower values keep fewer but more reliable matches
- The quality of the input images, including changes in scale, rotation, lighting, and viewpoint

