
calibrateHandEye() Python OpenCV

Last Updated : 17 Apr, 2023

OpenCV is an open-source computer vision and machine learning software library widely used for a variety of applications. It provides functions and utilities for image processing and computer vision tasks, including the calibrateHandEye() function. The calibrateHandEye() function estimates the transformation between a robotic hand (gripper or end effector) and an eye (a camera) rigidly attached to it. This article discusses the concepts behind OpenCV's calibrateHandEye() function, along with an example.

Prerequisites: Python, OpenCV, NumPy

Explanation of concepts

The calibrateHandEye() function determines the transformation between a hand and its eye, that is, the pose of the camera (eye) relative to the gripper (hand) it is mounted on. The result consists of a 3×3 rotation matrix and a 3×1 translation vector, which together describe the relationship between the two coordinate systems in rotation and translation. In the usual eye-in-hand setup, the camera is rigidly attached to a robotic gripper and observes a fixed calibration target (such as a chessboard) while the robot moves through several poses. For each pose, you know the transformation from the gripper frame to the robot base frame (reported by the robot controller) and the transformation from the target frame to the camera frame (estimated from the image, for example with solvePnP()). The calibrateHandEye() function takes these two sets of transformations as input, each given as a list of rotation matrices and a list of translation vectors, and returns the transformation that maps the camera frame to the gripper frame.

You must have a set of corresponding pose pairs to use the function: for every robot pose, one gripper-to-base transformation and one target-to-camera transformation. All translations should be expressed in the same units (e.g., millimetres or metres). You also need at least three distinct robot poses, with relative rotations about non-parallel axes, to get a solution; the more poses you provide, the more accurate the result will be.
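As an illustration only, here is a minimal sketch of how those input lists might be assembled, assuming the robot controller reports each gripper pose as an axis-angle rotation vector plus a translation (the gripper_rvecs and gripper_tvecs names are made up for this example):

import cv2
import numpy as np

# Hypothetical gripper poses reported by the robot controller:
# one axis-angle rotation vector and one translation per robot pose
gripper_rvecs = [np.array([0.1, 0.0, 0.2]),
                 np.array([0.0, 0.3, -0.1]),
                 np.array([0.2, -0.2, 0.1])]
gripper_tvecs = [np.array([0.40, 0.00, 0.30]),
                 np.array([0.50, 0.10, 0.20]),
                 np.array([0.30, 0.20, 0.40])]

# Convert each axis-angle vector to a 3x3 rotation matrix with cv2.Rodrigues()
R_gripper2base = [cv2.Rodrigues(r)[0] for r in gripper_rvecs]
t_gripper2base = [t.reshape(3, 1) for t in gripper_tvecs]

# R_target2cam and t_target2cam are built the same way, typically from
# cv2.solvePnP() run on images of the calibration pattern at each pose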

You pass the two sets of poses to calibrateHandEye() as lists of 3×3 rotation matrices and 3×1 translation vectors. The function then returns the rotation matrix and translation vector of the camera frame with respect to the gripper frame:

R_gripper2base = [R_1, R_2, ..., R_n]   # gripper poses in the robot base frame
t_gripper2base = [t_1, t_2, ..., t_n]
R_target2cam = [R_1, R_2, ..., R_n]     # target poses in the camera frame
t_target2cam = [t_1, t_2, ..., t_n]

R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
    R_gripper2base, t_gripper2base, R_target2cam, t_target2cam)

Once you have the rotation matrix and translation vector, you can assemble them into a 4×4 homogeneous transformation matrix and use it to transform coordinates from the camera frame into the gripper frame (or, with its inverse, from the gripper frame into the camera frame), as shown in the sketch below.
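As an illustration, here is a minimal sketch of assembling that matrix and mapping a single point; R_cam2gripper and t_cam2gripper are the outputs of the calibrateHandEye() call above, and point_cam is a hypothetical 3D point invented for this example:

import numpy as np

# Assemble the 4x4 homogeneous camera-to-gripper transform
T_cam2gripper = np.eye(4)
T_cam2gripper[:3, :3] = R_cam2gripper          # 3x3 rotation from calibrateHandEye()
T_cam2gripper[:3, 3] = t_cam2gripper.ravel()   # 3x1 translation from calibrateHandEye()

# Hypothetical point expressed in the camera frame, in homogeneous coordinates
point_cam = np.array([0.1, 0.2, 0.5, 1.0])

# The same point expressed in the gripper frame
point_gripper = T_cam2gripper @ point_cam

# Mapping the other way (gripper frame -> camera frame) uses the inverse transform
point_cam_back = np.linalg.inv(T_cam2gripper) @ point_gripper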

Examples

Python3

import cv2
import numpy as np

# Ground-truth camera-to-gripper transform that calibration should recover
R_cam2gripper_true, _ = cv2.Rodrigues(np.array([0.1, -0.2, 0.3]))
t_cam2gripper_true = np.array([[0.05], [0.02], [0.10]])

# Fixed pose of the calibration target in the robot base frame
R_target2base, _ = cv2.Rodrigues(np.array([0.5, 0.1, -0.3]))
t_target2base = np.array([[1.0], [0.5], [0.2]])

R_gripper2base, t_gripper2base = [], []
R_target2cam, t_target2cam = [], []

# Several synthetic robot poses given as (rotation vector, translation)
poses = [([0.1, 0.2, 0.3], [0.1, 0.2, 0.3]),
         ([-0.3, 0.1, 0.2], [0.3, -0.1, 0.2]),
         ([0.2, -0.1, 0.4], [-0.2, 0.4, 0.1]),
         ([0.4, 0.3, -0.2], [0.2, 0.1, -0.3])]

for rvec, tvec in poses:
    R_g2b, _ = cv2.Rodrigues(np.array(rvec))
    t_g2b = np.array(tvec).reshape(3, 1)
    # Target pose seen by the camera, consistent with the fixed target:
    # target2cam = inv(cam2gripper) * inv(gripper2base) * target2base
    R_t2c = R_cam2gripper_true.T @ R_g2b.T @ R_target2base
    t_t2c = R_cam2gripper_true.T @ (
        R_g2b.T @ (t_target2base - t_g2b) - t_cam2gripper_true)
    R_gripper2base.append(R_g2b)
    t_gripper2base.append(t_g2b)
    R_target2cam.append(R_t2c)
    t_target2cam.append(t_t2c)

# Recover the camera-to-gripper transform from the pose pairs
R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
    R_gripper2base, t_gripper2base, R_target2cam, t_target2cam)

print(R_cam2gripper)
print(t_cam2gripper)


Output:

The recovered 3×3 rotation matrix R_cam2gripper followed by the 3×1 translation vector t_cam2gripper, which match the ground-truth camera-to-gripper transform defined at the top of the script.

Explanation:

Firstly, the relevant libraries are imported. A ground-truth camera-to-gripper transform and a fixed target-to-base transform are then defined; these play the role of the unknown quantity to be recovered and the stationary calibration target, respectively. For each of several synthetic robot poses, the gripper-to-base transform is built from an axis-angle rotation vector with cv2.Rodrigues(), and the corresponding target-to-camera transform is computed so that every pose is consistent with the same fixed target. The calibrateHandEye() function is then called with the four lists of rotation matrices and translation vectors, and the recovered rotation matrix and translation vector are printed; they should match the ground truth. In a real application, the gripper-to-base poses come from the robot controller and the target-to-camera poses come from detecting a calibration pattern in the camera images, but the structure of the call is the same. The calibrateHandEye() function is used to calibrate the hand-eye relationship for robotic systems, where the position of a hand (tool or end effector) is related to the position of an eye (camera or sensor) rigidly attached to it.
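calibrateHandEye() also takes an optional method argument selecting the estimation algorithm; the default is cv2.CALIB_HAND_EYE_TSAI, and alternatives such as cv2.CALIB_HAND_EYE_PARK, cv2.CALIB_HAND_EYE_HORAUD, cv2.CALIB_HAND_EYE_ANDREFF and cv2.CALIB_HAND_EYE_DANIILIDIS are available. As a small usage sketch, the call in the script above could be switched to the Park method like this:

# Same inputs as above, but using the Park method instead of the default Tsai method
R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
    R_gripper2base, t_gripper2base, R_target2cam, t_target2cam,
    method=cv2.CALIB_HAND_EYE_PARK)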


