Python OpenCV – drawMatchesKnn() Function


OpenCV (Open Source Computer Vision) is a free and open-source library of computer vision and machine learning algorithms designed to help developers build computer vision applications. It provides a wide range of tools and functions for tasks such as image and video processing, object detection and recognition, 3D reconstruction, and more.

One of the key features of OpenCV is its ability to process images and videos in real time, making it an important tool for building applications such as object tracking and face recognition. It also provides a number of machine learning algorithms that can be used to train models for tasks such as object detection and classification.

The drawMatchesKnn() function in Python's OpenCV library is used to draw the matches between the key points of two images. Before calling it, you need key points and descriptors computed by a feature detector and a list of matches produced by a matcher.

The feature detector refers to the method used to detect key points in the images and compute their descriptors. There are many different feature detectors available, each with its own strengths and weaknesses. Some common feature detectors include:

  • SIFT (Scale-Invariant Feature Transform)
  • SURF (Speeded-Up Robust Features)
  • ORB (Oriented FAST and Rotated BRIEF)
  • BRISK (Binary Robust Invariant Scalable Keypoints)

Each of these feature detectors has a different set of parameters that can be adjusted to optimize its performance. The choice of feature detector will depend on the specific characteristics of the images and the desired properties of the key points.
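As a rough illustration (my own sketch, not from the original article; the file name and parameter values are assumptions), two common detectors can be created and tuned like this:

import cv2

# Illustrative tuning only; good values depend on your images
sift = cv2.SIFT_create(nfeatures=500,           # keep the 500 strongest key points
                       contrastThreshold=0.04)  # discard weak, low-contrast features
orb = cv2.ORB_create(nfeatures=500,    # cap the number of key points
                     scaleFactor=1.2)  # pyramid decimation ratio between scale levels

image = cv2.imread('scene.jpg', cv2.IMREAD_GRAYSCALE)  # hypothetical file name
keypoints, descriptors = sift.detectAndCompute(image, None)

The drawMatchesKnn() function itself has the following signature: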

cv2.drawMatchesKnn(img1,
                   keypoints1,
                   img2,
                   keypoints2,
                   matches,
                   outImg,
                   matchColor=None,
                   singlePointColor=None,
                   matchesMask=None,
                   flags=None)
  • img1 and img2 are the two images that you want to draw the matches for.
  • keypoints1 and keypoints2 are the lists of key points detected in each image, as returned by a feature detector such as SIFT or SURF.
  • matches is a list of lists of key point matches (the k nearest matches per query descriptor), as returned by the knnMatch() method of a matcher such as FLANN or BFMatcher.
  • outImg is an optional output image that the matches will be drawn on. If this is not provided, a new image will be created.
  • matchColor is the color of the lines connecting the matched key points. If this is not provided, a random color is used for each match.
  • singlePointColor is the color of the key points that are not part of any match. If this is not provided, a random color is used.
  • matchesMask is an optional mask that specifies which matches to draw. If this is not provided, all matches will be drawn.
  • flags is an optional set of flags that controls how the output is drawn. For example, cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS excludes the unmatched single key points from the output image.

The drawMatchesKnn() function returns the output image with the matches drawn on it. You can then display it with imshow() or save it with imwrite(), as in the examples below.

It is important to note that the drawMatchesKnn() function is not very efficient and may be slow for large numbers of matches. If you need to draw a large number of matches, you may want to consider using another method, such as drawing the matches manually using the line() function.
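For instance, here is a rough sketch of the manual approach (entirely illustrative; it assumes image1, image2, keypoint1, keypoint2 and a flat list good of DMatch objects already exist, with both images in BGR color):

import numpy as np
import cv2

# Place the two images side by side on one canvas
h1, w1 = image1.shape[:2]
h2, w2 = image2.shape[:2]
canvas = np.zeros((max(h1, h2), w1 + w2, 3), dtype=np.uint8)
canvas[:h1, :w1] = image1
canvas[:h2, w1:w1 + w2] = image2

# Draw one line per match, shifting the second image's x-coordinates
for m in good:
    x1, y1 = map(int, keypoint1[m.queryIdx].pt)
    x2, y2 = map(int, keypoint2[m.trainIdx].pt)
    cv2.line(canvas, (x1, y1), (x2 + w1, y2), (0, 255, 0), 1)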

The appearance of the resulting image will depend on the specific parameters that you pass to the drawMatchesKnn() function, such as the colors of the lines and key points and the mask that specifies which matches to draw. You can customize the appearance of the image to suit your needs.
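For instance, a hypothetical call combining these options (img1, kp1, img2, kp2, knn_matches, and mask are assumed to come from earlier steps) might look like this:

out = cv2.drawMatchesKnn(
    img1, kp1, img2, kp2, knn_matches, None,
    matchColor=(0, 255, 0),   # green match lines (BGR)
    matchesMask=mask,         # draw only the masked matches
    flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)  # hide unmatched key points

cv2.imshow('Matches', out)
cv2.waitKey(0)
cv2.destroyAllWindows()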

FLANN (Fast Library for Approximate Nearest Neighbors) is an efficient library for performing fast approximate nearest neighbor searches. It can be used to find the nearest neighbors of a set of query points in a large dataset, and it is particularly useful when the dataset is too large to fit in memory. FLANN works by constructing a data structure (such as a KD-tree or a hierarchical clustering tree) that allows it to quickly search through the dataset and find the nearest neighbors of a query point. It returns approximate nearest neighbors, meaning the results may not be the exact nearest neighbors, but they will be close to them.

BFMatcher, on the other hand, stands for Brute-Force Matcher. It is a simple and straightforward method for matching descriptors. It works by comparing each descriptor in one set with every descriptor in the other set, and it returns the matches with the lowest Euclidean distance. BFMatcher is easy to use and can be effective for small datasets, but it becomes inefficient as the dataset grows larger, because it has to compare every descriptor with every other descriptor. This makes it less suitable for large-scale applications where speed is a concern.

In summary, FLANN is a more efficient method for finding nearest neighbors in large datasets, while BFMatcher is a simpler method that can be used for small datasets.
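For reference, both matchers expose the same knnMatch() interface. Here is a minimal construction sketch for float descriptors such as SIFT's (the parameter values are illustrative):

import cv2

FLANN_INDEX_KDTREE = 1
flann = cv2.FlannBasedMatcher(dict(algorithm=FLANN_INDEX_KDTREE, trees=5),
                              dict(checks=50))
bf = cv2.BFMatcher(cv2.NORM_L2)  # Euclidean distance suits SIFT/SURF descriptors

# Either matcher can then be used the same way:
# matches = flann.knnMatch(descriptors1, descriptors2, k=2)
# matches = bf.knnMatch(descriptors1, descriptors2, k=2)

With FLANN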

Python3




import numpy as np
import cv2

# Load the images
image1 = cv2.imread('Bhagavad-Gita.jpg')
image2 = cv2.imread('Geeta.jpg')

# Optionally convert to grayscale before detection:
# image1 = cv2.cvtColor(image1, cv2.COLOR_BGR2GRAY)
# image2 = cv2.cvtColor(image2, cv2.COLOR_BGR2GRAY)

# Initiate SIFT detector
sift = cv2.SIFT_create()

# Find the key points and descriptors with SIFT
keypoint1, descriptors1 = sift.detectAndCompute(image1, None)
keypoint2, descriptors2 = sift.detectAndCompute(image2, None)

# Find the two nearest neighbors of each descriptor with FLANN
FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=20)
search_params = dict(checks=150)   # or pass an empty dictionary

flann = cv2.FlannBasedMatcher(index_params, search_params)

Matches = flann.knnMatch(descriptors1, descriptors2, k=2)

# We only want to draw the good matches, so build a mask
good_matches = [[0, 0] for _ in range(len(Matches))]

# Lowe's ratio test: keep a match only if it is clearly
# better than the second-best candidate
for i, (m, n) in enumerate(Matches):
    if m.distance < 0.5 * n.distance:
        good_matches[i] = [1, 0]

# Draw the matches using drawMatchesKnn()
Matched = cv2.drawMatchesKnn(image1,
                             keypoint1,
                             image2,
                             keypoint2,
                             Matches,
                             outImg=None,
                             matchColor=(0, 155, 0),
                             singlePointColor=(0, 255, 255),
                             matchesMask=good_matches,
                             flags=0)

# Save the output image (imwrite returns True on success)
cv2.imwrite('Match.jpg', Matched)


Output:

True
[Output image: drawMatchKNN]

This code first loads the two images and detects the key points and descriptors using the SIFT feature detector. It then uses the FLANN matcher to find the nearest neighbors and filters the matches using Lowe's ratio test. Finally, it uses the drawMatchesKnn() function to draw the matches between the key points and saves the resulting image.

With BFMatcher

Python3




import numpy as np
import cv2

# Load the images
image1 = cv2.imread('Bhagavad-Gita.jpg')
image2 = cv2.imread('Geeta.jpg')

# Optionally convert to grayscale before detection:
# image1 = cv2.cvtColor(image1, cv2.COLOR_BGR2GRAY)
# image2 = cv2.cvtColor(image2, cv2.COLOR_BGR2GRAY)

# Initiate SIFT detector
sift = cv2.SIFT_create()

# Find the key points and descriptors with SIFT
keypoint1, descriptors1 = sift.detectAndCompute(image1, None)
keypoint2, descriptors2 = sift.detectAndCompute(image2, None)

# Initialize the BFMatcher for matching
BFMatch = cv2.BFMatcher()
Matches = BFMatch.knnMatch(descriptors1, descriptors2, k=2)

# We only want to draw the good matches, so build a mask
good_matches = [[0, 0] for _ in range(len(Matches))]

# Ratio test as per Lowe's paper
for i, (m, n) in enumerate(Matches):
    if m.distance < 0.5 * n.distance:
        good_matches[i] = [1, 0]

# Draw the matches using drawMatchesKnn()
Matched = cv2.drawMatchesKnn(image1,
                             keypoint1,
                             image2,
                             keypoint2,
                             Matches,
                             outImg=None,
                             matchColor=(0, 0, 255),
                             singlePointColor=(0, 255, 255),
                             matchesMask=good_matches,
                             flags=0)

# Save the output image (imwrite returns True on success)
cv2.imwrite('BFMatch.jpg', Matched)


Output:

True
[Output image: DrawMatchesKnn with BFMatcher]

Important Factors

There are several factors that can affect the accuracy of the matches drawn by drawMatchesKnn():

  • Quality of the key points and descriptors: The accuracy of the matching process depends on the quality of the key points and descriptors used to represent the features in the images. If the key points are not distinctive or the descriptors are not robust, it will be more difficult to find accurate matches.
  • Choice of feature detector and descriptor extractor: The accuracy of the matching process can be influenced by the choice of feature detector and descriptor extractor. Different feature detectors and descriptor extractors will produce key points and descriptors with different properties, and some may be more suited to the specific characteristics of the images being matched.
  • Number of key points and descriptors: The more key points and descriptors that are used, the more information there is for the matching process to work with, which can improve the accuracy of the matches. However, using too many key points and descriptors can also slow down the matching process.
  • Presence of noise and other distractions in the images: Noise and other distractions in the images can make it more difficult to find accurate matches. Preprocessing the images to remove noise and distractions can help improve the accuracy of the matches (see the sketch after this list).
  • Choice of matching method: Different matching methods (such as FLANN or BFMatcher) may have different levels of accuracy, depending on the specific characteristics of the images and the desired properties of the matches.
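As a small illustration of the preprocessing point above (my own sketch; the file name is hypothetical):

import cv2

image = cv2.imread('noisy.jpg')

# Light Gaussian smoothing is often enough to suppress sensor noise
blurred = cv2.GaussianBlur(image, (5, 5), 0)

# Stronger non-local-means denoising for color images
denoised = cv2.fastNlMeansDenoisingColored(image, None, 10, 10, 7, 21)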

