OpenCV Panorama Stitching
This article describes how to “stitch” images together using OpenCV and Python.
A panorama is essentially a photograph extended horizontally, built from several overlapping images combined without visible distortion at the seams.
We will stitch 3 images together in today’s discussion to create our own panorama. For this to work, the images must share some common keypoints; in other words, a small portion of consecutive images must overlap, otherwise the images cannot be stitched together. Once we have the relevant images ready, we can simply use the stitch() function, which belongs to the Stitcher class in the OpenCV module. It takes a list of images as an argument and stitches them together, returning a status code along with the resultant image; the status equals cv2.Stitcher_OK when the stitching succeeds.
Resize the input images according to your needs: if the images are very large, scaling them down first is usually the better option.
OpenCV’s stitching algorithm is based on Brown and Lowe’s paper, Automatic Panoramic Image Stitching Using Invariant Features, and the overall flow of OpenCV’s stitching class can be summarized in the steps below.
The panorama stitching algorithm can be divided into four fundamental steps. These steps are as follows:
- Detecting keypoints (distinctive points in the image) and extracting local invariant descriptors (SIFT features) from the input images.
- Matching the descriptors between the input images.
- Calculating the homography matrix using the RANSAC algorithm.
- Applying the homography matrix to warp the images into alignment and merge them into one panorama.
Descriptors are vectors that describe the local surroundings around a specific keypoint.
Keypoints are found by computing the difference of Gaussians (DoG) of the image at different levels. That means the image is blurred with a Gaussian filter at different magnitudes, from slightly blurred to progressively more blurred. Consecutive blurred images are then subtracted from each other, resulting in difference images across the levels of Gaussian blur. These difference images are stacked on top of each other, and locally distinct extrema in this stack are selected as keypoints.
Descriptors are computed by looking at the neighborhood of a keypoint, dividing that local neighborhood into small regions, and computing the image gradients within each region. These gradients are collected into histograms: how often each gradient orientation occurs, weighted by magnitude, is recorded per small local region. Finally, concatenating all the histogram values yields the descriptor vector. This procedure as a whole is known as SIFT (Scale-Invariant Feature Transform), which is simply a method for computing keypoints and descriptors.
Coming to the homography matrix, it maps corresponding points of one image onto another, which makes it essential for creating the panorama: computing the homography between two images lets us warp one image onto the plane of the other. The RANSAC algorithm is used to compute this homography matrix robustly, even when some of the matched points are wrong.
The actual stitching is done by the .stitch() method of the Stitcher class, which implements the steps above and ships with recent versions of OpenCV. stitch() accepts a list of images as a parameter and returns a tuple (status, output), where status indicates whether the stitching succeeded (it equals cv2.Stitcher_OK on success) and output is the resultant panorama.