I'm trying to estimate my camera position with OpenCV's solvePnP() and Rodrigues() methods.
Reference for the code below (example shapes are sketched after this list):
- obj_points: 3D real-world points
- img_points: 2D pixel points taken from the original (not undistorted) image
- mtx: intrinsic matrix
- dist: distortion coefficients
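To make the shapes concrete, here is a purely illustrative sketch of how these variables could look (all values are made up, not my real calibration data):

import numpy as np

obj_points = np.array([[0.0, 0.0, 0.0],
                       [1.0, 0.0, 0.0],
                       [1.0, 1.0, 0.0],
                       [0.0, 1.0, 0.0]], dtype=np.float32)   # Nx3 world points
img_points = np.array([[320.0, 240.0],
                       [400.0, 238.0],
                       [402.0, 320.0],
                       [318.0, 322.0]], dtype=np.float32)    # Nx2 pixel points
mtx = np.array([[800.0,   0.0, 320.0],
                [  0.0, 800.0, 240.0],
                [  0.0,   0.0,   1.0]])                      # 3x3 intrinsic matrix
dist = np.array([0.1, -0.05, 0.0, 0.0, 0.0])                 # k1, k2, p1, p2, k3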
This is the first try, with the original (not undistorted) image points:

import cv2 as cv
import numpy as np

_, rvecs, tvecs = cv.solvePnP(
    obj_points,
    img_points,
    mtx,
    dist,
    flags=cv.SOLVEPNP_ITERATIVE,
)
rotMat, _ = cv.Rodrigues(rvecs)  # rotation vector -> 3x3 rotation matrix
camera_position = -np.matrix(rotMat).T * np.matrix(tvecs)  # C = -R^T * t
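As a sanity check on this pose, I believe the recovered rvecs/tvecs can be verified by reprojecting the object points and comparing them against the measured pixel points (just a sketch, not part of my actual pipeline):

projected, _ = cv.projectPoints(obj_points, rvecs, tvecs, mtx, dist)
reproj_error = np.linalg.norm(projected.reshape(-1, 2) - img_points, axis=1).mean()
print("mean reprojection error (px):", reproj_error)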
In this case the camera position is correct, but when I try to do the same thing with the undistorted image points, the result changes:

_, rvecs, tvecs = cv.solvePnP(
    obj_points,
    undistorted_img_points,
    mtx,
    None,  # distortion set to zero: the points are already undistorted
    flags=cv.SOLVEPNP_ITERATIVE,
)
rotMat, _ = cv.Rodrigues(rvecs)
camera_position = -np.matrix(rotMat).T * np.matrix(tvecs)
The undistorted points are correct (I checked by plotting them on the undistorted image), and the distortion coefficients are set to zero in this case. I've also tried passing np.eye(3) instead of mtx, but that didn't work either.
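For reference, the undistorted points are produced roughly like this (a sketch of the call I believe is relevant; passing P=mtx keeps the output in pixel coordinates, otherwise cv.undistortPoints returns normalized coordinates):

pts = img_points.reshape(-1, 1, 2).astype(np.float32)  # undistortPoints expects Nx1x2
undistorted_img_points = cv.undistortPoints(pts, mtx, dist, P=mtx).reshape(-1, 2)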
Why is the camera position not the same in both cases? Shouldn't it be, or am I missing something?