
I have two sets of corresponding points from two images. I have estimated the Essential matrix, which encodes the relative transformation between the two cameras:

E, mask = cv2.findEssentialMat(points1, points2, 1.0)

I've then extracted the rotation and translation components:

points, R, t, mask = cv2.recoverPose(E, points1, points2)

But how do I actually get the camera matrices of the two cameras, so I can use cv2.triangulatePoints to generate a little point cloud?

nickponline
  • What exactly have you tried? – DisappointedByUnaccountableMod Nov 25 '15 at 00:00
  • @barny The only thing I can think of is that you _can't_ get absolute positions, so you have to assume that one camera is at the origin with extrinsics [I | 0] and the other is [R | t] (sketched just below). – nickponline Nov 25 '15 at 00:06
  • The title of the question and accepted answer do not match. Probably the title should be changed to how to estimate camera pose from two views. Estimating camera position requires at least one more step, where one camera is assumed to be at the origin and the relative position of the other is estimated. – abggcv Oct 30 '18 at 03:22
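
A minimal sketch of the idea from the comment above (my own illustration, not from the thread; K here is an assumed, known 3x3 intrinsic matrix, and R, t stand in for the pose returned by cv2.recoverPose): fix the first camera at the origin and build both projection matrices from the relative pose.

import numpy as np

# Assumed inputs: K (3x3 intrinsics), and R, t as returned by cv2.recoverPose
K = np.eye(3)                        # placeholder intrinsics
R, t = np.eye(3), np.zeros((3, 1))   # placeholder pose

P1 = K @ np.hstack((np.eye(3), np.zeros((3, 1))))   # first camera:  K [I | 0]
P2 = K @ np.hstack((R, t))                          # second camera: K [R | t]

These P1 and P2 are the projection matrices cv2.triangulatePoints expects, as the accepted answer below works through in full.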

1 Answer


Here is what I did:

Input:

pts_l - set of n 2d points in left image. nx2 numpy float array
pts_r - set of n 2d points in right image. nx2 numpy float array

K_l - Left Camera matrix. 3x3 numpy float array
K_r - Right Camera matrix. 3x3 numpy float array

Code:

import numpy as np
import cv2

# Normalize the points for the Essential matrix calculation
pts_l_norm = cv2.undistortPoints(np.expand_dims(pts_l, axis=1), cameraMatrix=K_l, distCoeffs=None)
pts_r_norm = cv2.undistortPoints(np.expand_dims(pts_r, axis=1), cameraMatrix=K_r, distCoeffs=None)

# Estimate the Essential matrix and recover the relative pose (R, t)
E, mask = cv2.findEssentialMat(pts_l_norm, pts_r_norm, focal=1.0, pp=(0., 0.), method=cv2.RANSAC, prob=0.999, threshold=3.0)
points, R, t, mask = cv2.recoverPose(E, pts_l_norm, pts_r_norm)

# Extrinsics: left camera at the origin, right camera at [R | t]
M_r = np.hstack((R, t))
M_l = np.hstack((np.eye(3, 3), np.zeros((3, 1))))

# Projection matrices P = K [R | t], then triangulate in homogeneous coordinates
P_l = np.dot(K_l,  M_l)
P_r = np.dot(K_r,  M_r)
point_4d_hom = cv2.triangulatePoints(P_l, P_r, np.expand_dims(pts_l, axis=1), np.expand_dims(pts_r, axis=1))

# Convert each homogeneous 4-vector to a 3D point by dividing by its last coordinate
point_4d = point_4d_hom / np.tile(point_4d_hom[-1, :], (4, 1))
point_3d = point_4d[:3, :].T

Output:

point_3d - nx3 numpy array
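
For completeness, here is a self-contained sanity check of the pipeline above (my own sketch, not part of the original answer; the intrinsics, pose, and points are all made up). It projects synthetic 3D points into two hypothetical cameras, runs the steps from the answer, and confirms the recovered translation direction. Remember that the reconstruction is only up to scale.

import numpy as np
import cv2

rng = np.random.RandomState(0)

# Hypothetical shared intrinsics for both cameras
K_l = K_r = np.array([[800., 0., 320.],
                      [0., 800., 240.],
                      [0., 0., 1.]])

# Ground-truth relative pose: small rotation about y, unit baseline along x
R_gt, _ = cv2.Rodrigues(np.array([0., 0.1, 0.]))
t_gt = np.array([[-1.], [0.], [0.]])

# Random 3D points in front of both cameras
pts_3d = np.hstack((rng.uniform(-2, 2, (50, 2)), rng.uniform(8, 12, (50, 1))))

def project(pts, K, R, t):
    cam = R @ pts.T + t    # world -> camera coordinates
    img = K @ cam          # camera -> pixel coordinates
    return (img[:2] / img[2]).T.astype(np.float32)

pts_l = project(pts_3d, K_l, np.eye(3), np.zeros((3, 1)))
pts_r = project(pts_3d, K_r, R_gt, t_gt)

# The steps from the answer
pts_l_norm = cv2.undistortPoints(np.expand_dims(pts_l, axis=1), cameraMatrix=K_l, distCoeffs=None)
pts_r_norm = cv2.undistortPoints(np.expand_dims(pts_r, axis=1), cameraMatrix=K_r, distCoeffs=None)
E, mask = cv2.findEssentialMat(pts_l_norm, pts_r_norm, focal=1.0, pp=(0., 0.), method=cv2.RANSAC, prob=0.999, threshold=3.0)
points, R, t, mask = cv2.recoverPose(E, pts_l_norm, pts_r_norm)
P_l = K_l @ np.hstack((np.eye(3), np.zeros((3, 1))))
P_r = K_r @ np.hstack((R, t))
point_4d_hom = cv2.triangulatePoints(P_l, P_r, pts_l.T, pts_r.T)  # 2xN point format
point_3d = (point_4d_hom[:3] / point_4d_hom[3]).T

# recoverPose returns t with unit norm, so compare directions only
print(np.allclose(t, t_gt / np.linalg.norm(t_gt), atol=1e-3))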
Yonatan Simson
  • Thanks Yonatan, just to confirm: are pts_l and pts_r the 2D point sets in the left and right images, as you stated above (nx2 numpy float arrays)? – Salih Karagoz Oct 10 '18 at 19:39
  • Yes. Note however that I prefer to always specify left and right instead of numbers. Less chance of getting confused. – Yonatan Simson Oct 15 '18 at 12:56
  • Hi @YonatanSimson, I feel like there is something wrong with your `cv2.triangulatePoints` call. You already normalized `pts_l` and `pts_r` using the camera matrices, so you shouldn't multiply the intrinsics into `M_l` and `M_r` again. Or you can use the original `pts_l` but with `P_l = np.dot(K_l, M_l)` passed into `triangulatePoints`. – Microos Oct 20 '19 at 03:07
  • Hi @Microos, pts_l/r are non-homogeneous 2D points in pixel coordinates. Their equivalents, as rays from the camera center, are pts_l/r_norm; "normalized" means those coordinates no longer depend on the camera matrix. The normalized points are only used to extract the relative pose between the cameras (i.e. the extrinsics), which we need for building the projection matrices. The projection matrices, together with the original pixel points, are then used in the triangulation (see the sketch after these comments). How would you do it? – Yonatan Simson Oct 20 '19 at 06:45
  • I'm sorry if this is a silly question, but how do you get `K_L` and `K_R`? – jihan1008 Nov 19 '19 at 04:55
  • @jihan1008 You will need to either use data from a calibrated camera or calibrate the camera yourself. How to calibrate the intrinsics: https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_calib3d/py_calibration/py_calibration.html – Yonatan Simson Nov 20 '19 at 15:21
  • @YonatanSimson Could you please tell me what the line starting with point_4d does? I don't get it. – Radwa Khattab Feb 18 '20 at 13:01
  • @Rou It converts a homogeneous 4-vector into a 3D point. Many 3D geometric solutions do the maths in the homogeneous realm; one then converts the homogeneous point back to a normal point by dividing by the last value and dropping it. – Yonatan Simson Feb 18 '20 at 13:47
  • Ah, got it. Thanks! – Radwa Khattab Feb 22 '20 at 10:02
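
To make the exchange between @Microos and @YonatanSimson above concrete, here is a small self-contained sketch (my own illustration with made-up intrinsics, pose, and points) showing that the two formulations agree: triangulating pixel coordinates with P = K [R | t] gives the same cloud as triangulating normalized coordinates with a bare [R | t].

import numpy as np
import cv2

# Made-up intrinsics and relative pose
K = np.array([[700., 0., 300.],
              [0., 700., 250.],
              [0., 0., 1.]])
R, _ = cv2.Rodrigues(np.array([0., 0.05, 0.]))
t = np.array([[-1.], [0.], [0.]])
M_l = np.hstack((np.eye(3), np.zeros((3, 1))))
M_r = np.hstack((R, t))

# A few made-up 3D points projected into both views (pixel coordinates, 2xN)
X = np.array([[0.5, -0.2, 10.], [-1., 1., 12.], [2., 0., 9.]])
def proj(P, X):
    x = P @ np.vstack((X.T, np.ones(len(X))))
    return x[:2] / x[2]
pts_l = proj(K @ M_l, X)
pts_r = proj(K @ M_r, X)

# Route 1 (the answer): pixel points, K folded into the projection matrices
a = cv2.triangulatePoints(K @ M_l, K @ M_r, pts_l, pts_r)

# Route 2 (the comment): normalized points (K removed), bare extrinsics
norm_l = np.linalg.inv(K) @ np.vstack((pts_l, np.ones(pts_l.shape[1])))
norm_r = np.linalg.inv(K) @ np.vstack((pts_r, np.ones(pts_r.shape[1])))
b = cv2.triangulatePoints(M_l, M_r, norm_l[:2], norm_r[:2])

print(np.allclose(a[:3] / a[3], b[:3] / b[3]))  # True: both recover X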