I need to recover the relative pose of two cameras with different camera matrices. Searching the OpenCV docs, I found this method:

Mat cv::findEssentialMat(InputArray points1, InputArray points2,
                         InputArray cameraMatrix1, InputArray distCoeffs1,
                         InputArray cameraMatrix2, InputArray distCoeffs2, ...)

But it says:

Note that this function assumes that points1 and points2 are feature points from cameras with the same camera matrix. If this assumption does not hold for your use case, use undistortPoints() with P = cv::NoArray() for both cameras to transform image points to normalized image coordinates, which are valid for the identity camera matrix. When passing these coordinates, pass the identity matrix for this parameter.

BTW: I am using OpenCV version 4.5.1

With the help of the example code from THIS link, I tried to build it in C++. For testing I used two images from the same camera and ran both variants: the code with undistortPoints and the code without. But the results are not the same. What am I doing wrong? Or what do I misunderstand?

I got matched keypoints from the two images/cameras. The camera matrices are also known. Code extract for the undistortPoints variant:

vector<cv::Point2f> leftNormalizedPoints, rightNormalizedPoints;
cv::undistortPoints(leftPoints, leftNormalizedPoints, leftK, leftDistortion);
cv::undistortPoints(rightPoints, rightNormalizedPoints, rightK, rightDistortion);

cv::Mat mask;
cv::Mat E = cv::findEssentialMat(leftNormalizedPoints, rightNormalizedPoints,
                                 cv::Mat_<double>::eye(3, 3), cv::Mat(),
                                 cv::Mat_<double>::eye(3, 3), cv::Mat(),
                                 cv::RANSAC, 0.999, 1.0, mask);

cv::Mat R, t;
cv::recoverPose(E, leftNormalizedPoints, rightNormalizedPoints, cv::Mat_<double>::eye(3,3), R, t, mask);

cv::Mat pose;
cv::hconcat(R, t, pose);

The resulting pose is:
[0.961603134791149, -0.01578174741733072, -0.2739896852224382, -0.5087070713805589;
 0.097730029342125, 0.9525919588571922, 0.288127404606722, 0.8349480887048125;
 0.256453217029193, -0.3038412354652899, 0.9175577644520827, -0.2099495289244435]

Code extract for the normal variant:

cv::Mat mask;
cv::Mat E = cv::findEssentialMat(leftPoints, rightPoints,
                                 leftK, leftDistortion,
                                 rightK, rightDistortion,
                                 cv::RANSAC, 0.999, 1.0, mask);
cv::Mat R, t;
cv::recoverPose(E, leftPoints, rightPoints, leftK, R, t, mask);

cv::Mat pose;
cv::hconcat(R, t, pose);

The resulting pose is:
[0.9997403988434749, -0.00195556755472652, 0.02270045541015531, -0.982742576593385;
 0.002068083200570161, 0.999985688646742, -0.004934119331409387, 0.1754503851901947;
 -0.02269048153224316, 0.004979784858803584, 0.9997301354818684, 0.05860196658139444]

Maybe the docs are not accurate. Here it says that the undistortion is handled by findEssentialMat itself.

brunothg
  • I think with intrinsics and lens distortion parameters it should be possible to undistort and "change" the image to look as if it had been captured by a camera with different intrinsics (e.g. the ones from your other cam). – Micka Feb 20 '21 at 21:52
  • @Micka I agree. Here it should be the identity camera matrix. But it should not change the extrinsics (I thought), yet that is what is happening here. – brunothg Feb 20 '21 at 21:56
  • I believe this is supported starting from opencv 4.5.5 https://github.com/opencv/opencv/blob/dad26339a975b49cfb6c7dbe4bd5276c9dcb36e2/modules/calib3d/src/five-point.cpp#L535 – gisil May 02 '22 at 05:17
