
I'm trying to estimate my camera position with OpenCV's solvePnP() and Rodrigues() functions.

Reference for the code below:

  • obj_points: 3d real world points
  • img_points: 2d camera pixel points taken from the original (not undistorted) image
  • mtx: intrinsic matrix
  • dist: distortion coefficients

This is the first try, with the original (not undistorted) image points:

import cv2 as cv
import numpy as np

_, rvecs, tvecs = cv.solvePnP(
        obj_points,
        img_points,
        mtx,
        dist,
        flags=cv.SOLVEPNP_ITERATIVE,
    )

rotMat, _ = cv.Rodrigues(rvecs)

# camera position in world coordinates: C = -R^T * t
camera_position = -np.matrix(rotMat).T * np.matrix(tvecs)
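For reference, the camera-position formula itself can be sanity-checked without OpenCV: solvePnP returns the world-to-camera transform, so with t = -R @ C the expression -R.T @ t recovers the camera center C. A minimal sketch with made-up numbers (the rotation angle and camera position here are hypothetical):

```python
import numpy as np

def camera_center(R, t):
    # world-coordinate camera position from a world->camera
    # rotation R and translation t:  C = -R^T t
    return -R.T @ t

# example rotation: 30 degrees about the z-axis
theta = np.deg2rad(30)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

C = np.array([1.0, 2.0, 3.0])  # hypothetical ground-truth camera position
t = -R @ C                     # translation as solvePnP would return it

print(camera_center(R, t))     # → [1. 2. 3.]
```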

In this case the camera position is correct, but when I try to do the same thing with the undistorted image points the result changes:

_, rvecs, tvecs = cv.solvePnP(
        obj_points,
        undistorted_img_points,
        mtx,
        None,  # no distortion coefficients for the undistorted points
        flags=cv.SOLVEPNP_ITERATIVE,
    )

rotMat, _ = cv.Rodrigues(rvecs)

camera_position = -np.matrix(rotMat).T * np.matrix(tvecs)

The undistorted points are correct (verified by plotting them on the undistorted image), and the distortion coefficients are set to zero in this case. I've also tried passing np.eye(3) instead of mtx, but that didn't work either.

Why is the camera position not the same in both cases? Shouldn't it be, or am I missing something?

  • Not 100% sure, but I think you have to change the camera matrix. After distortion correction, the principal point should be at the center of the image and the pixel sizes might have changed. I think there is some getOptimalNewCameraMatrix (or similar) function. Maybe try that one. – Micka Nov 24 '22 at 22:57
  • Have a look at this and the other answers: https://stackoverflow.com/a/65729065/2393191 – Micka Nov 24 '22 at 22:59
  • @Micka good point: I've replaced mtx with the output of `cv.getOptimalNewCameraMatrix` and it worked! The position results are not identical now, but very close (they differ by around 30 centimeters) – Marco Ghigo Nov 25 '22 at 09:07

0 Answers