I need to recover the relative pose of two cameras with different camera matrices. Searching the OpenCV docs, I found this method:
Mat cv::findEssentialMat(
    InputArray points1,
    InputArray points2,
    InputArray cameraMatrix1,
    InputArray distCoeffs1,
    InputArray cameraMatrix2,
    InputArray distCoeffs2,
    ...
)
But it says:
Note that this function assumes that points1 and points2 are feature points from cameras with the same camera matrix. If this assumption does not hold for your use case, use undistortPoints() with P = cv::NoArray() for both cameras to transform image points to normalized image coordinates, which are valid for the identity camera matrix. When passing these coordinates, pass the identity matrix for this parameter.
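If I read that correctly, the recipe for two different camera matrices would be something like this (a minimal sketch; points1, points2, K1, dist1, K2, dist2 are placeholders for my own matches and calibration data):

// Transform the matches to normalized image coordinates. The default
// P = cv::noArray() means the points are not reprojected with a new
// camera matrix, so they stay in normalized coordinates.
std::vector<cv::Point2f> norm1, norm2;
cv::undistortPoints(points1, norm1, K1, dist1, cv::noArray(), cv::noArray());
cv::undistortPoints(points2, norm2, K2, dist2, cv::noArray(), cv::noArray());

// Normalized coordinates are valid for the identity camera matrix,
// so pass the identity for the cameraMatrix parameter.
cv::Mat E = cv::findEssentialMat(norm1, norm2,
                                 cv::Mat::eye(3, 3, CV_64F), cv::RANSAC);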
BTW: I am using OpenCV version 4.5.1
With the help of the example code from THIS link, I tried to build it in C++. For testing, I used two images from the same camera and ran both the variant using undistortPoints and the variant without it. But the results are not the same. What am I doing wrong? Or what do I misunderstand?
I have matched keypoints from the two images/cameras, and the camera matrices are known. Code extract for the undistortPoints variant:
// Transform the matched points to normalized image coordinates
// (the default P = cv::noArray() leaves them in normalized coordinates).
vector<cv::Point2f> leftNormalizedPoints, rightNormalizedPoints;
cv::undistortPoints(leftPoints, leftNormalizedPoints, leftK, leftDistortion);
cv::undistortPoints(rightPoints, rightNormalizedPoints, rightK, rightDistortion);

// Estimate E, passing the identity as camera matrix for both cameras.
cv::Mat mask;
cv::Mat E = cv::findEssentialMat(leftNormalizedPoints, rightNormalizedPoints,
                                 cv::Mat_<double>::eye(3, 3), cv::Mat(),
                                 cv::Mat_<double>::eye(3, 3), cv::Mat(),
                                 cv::RANSAC, 0.999, 1.0, mask);

// Recover the relative pose, again with the identity camera matrix.
cv::Mat R, t;
cv::recoverPose(E, leftNormalizedPoints, rightNormalizedPoints,
                cv::Mat_<double>::eye(3, 3), R, t, mask);
cv::Mat pose;
cv::hconcat(R, t, pose);
The resulting pose is:
[0.961603134791149, -0.01578174741733072, -0.2739896852224382, -0.5087070713805589;
0.097730029342125, 0.9525919588571922, 0.288127404606722, 0.8349480887048125;
0.256453217029193, -0.3038412354652899, 0.9175577644520827, -0.2099495289244435]
Code extract for the normal variant:
// Pass both camera matrices and distortion coefficients and let
// findEssentialMat handle the undistortion itself.
cv::Mat mask;
cv::Mat E = cv::findEssentialMat(leftPoints, rightPoints,
                                 leftK, leftDistortion,
                                 rightK, rightDistortion,
                                 cv::RANSAC, 0.999, 1.0, mask);

// Recover the relative pose in pixel coordinates using leftK.
cv::Mat R, t;
cv::recoverPose(E, leftPoints, rightPoints, leftK, R, t, mask);
cv::Mat pose;
cv::hconcat(R, t, pose);
The resulting pose is:
[0.9997403988434749, -0.00195556755472652, 0.02270045541015531, -0.982742576593385;
0.002068083200570161, 0.999985688646742, -0.004934119331409387, 0.1754503851901947;
-0.02269048153224316, 0.004979784858803584, 0.9997301354818684, 0.05860196658139444]
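The two results clearly differ. Since recoverPose returns t only up to scale, I compare the rotation angle difference and the unit translation directions like this (a quick sketch; pose1 and pose2 are placeholder names for the two 3x4 results above):

// Angle of the relative rotation R1^T * R2; should be ~0 deg if both
// variants agreed.
cv::Mat R1 = pose1(cv::Rect(0, 0, 3, 3)), R2 = pose2(cv::Rect(0, 0, 3, 3));
cv::Mat rvec;
cv::Rodrigues(R1.t() * R2, rvec);
std::cout << "rotation difference: "
          << cv::norm(rvec) * 180.0 / CV_PI << " deg" << std::endl;

// t is only recovered up to scale, so compare unit directions.
cv::Mat t1 = pose1.col(3) / cv::norm(pose1.col(3));
cv::Mat t2 = pose2.col(3) / cv::norm(pose2.col(3));
std::cout << "t1 = " << t1 << std::endl << "t2 = " << t2 << std::endl;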
Maybe the docs are not on point. Here it says that the undistortion is handled by findEssentialMat itself.
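One thing I am not sure about: the RANSAC threshold parameter is given in the units of the input points, so passing 1.0 (one pixel) together with normalized coordinates might not be comparable to the pixel variant. Could that be the problem? A sketch of what I mean (assuming leftK is a CV_64F matrix):

// The threshold is in the same units as the points. For normalized
// coordinates, one pixel corresponds to roughly 1 / f, with f the
// focal length in pixels.
double f = leftK.at<double>(0, 0);
cv::Mat E2 = cv::findEssentialMat(leftNormalizedPoints, rightNormalizedPoints,
                                  cv::Mat_<double>::eye(3, 3), cv::RANSAC,
                                  0.999, 1.0 / f, mask);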