I'm working on stereo vision with the stereoRectifyUncalibrated() method in OpenCV 3.0.

I calibrate my system with the following steps:

  1. Detect and match SURF feature points between images from 2 cameras
  2. Apply findFundamentalMat() with the matching pairs
  3. Get the rectifying homographies with stereoRectifyUncalibrated() (a rough sketch of these steps follows below).
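This is roughly what the pipeline looks like (a minimal sketch, not my exact code, assuming OpenCV 3.0 with the opencv_contrib xfeatures2d module for SURF; img1/img2 are the two grayscale camera images and the thresholds are placeholders):

    #include <opencv2/opencv.hpp>
    #include <opencv2/xfeatures2d.hpp>   // SURF lives in opencv_contrib

    using namespace cv;

    void rectifyUncalibrated(const Mat& img1, const Mat& img2,
                             Mat& H1, Mat& H2, Mat& F)
    {
        // 1. Detect and match SURF feature points between the two images
        Ptr<xfeatures2d::SURF> surf = xfeatures2d::SURF::create(400);
        std::vector<KeyPoint> kp1, kp2;
        Mat desc1, desc2;
        surf->detectAndCompute(img1, noArray(), kp1, desc1);
        surf->detectAndCompute(img2, noArray(), kp2, desc2);

        BFMatcher matcher(NORM_L2, true);          // cross-check matching
        std::vector<DMatch> matches;
        matcher.match(desc1, desc2, matches);

        std::vector<Point2f> pts1, pts2;
        for (const DMatch& m : matches) {
            pts1.push_back(kp1[m.queryIdx].pt);
            pts2.push_back(kp2[m.trainIdx].pt);
        }

        // 2. Fundamental matrix from the matched pairs (RANSAC rejects outliers)
        F = findFundamentalMat(pts1, pts2, FM_RANSAC, 3.0, 0.99);

        // 3. Rectifying homographies H1 (left image) and H2 (right image)
        stereoRectifyUncalibrated(pts1, pts2, F, img1.size(), H1, H2);
    }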

For each camera, I compute a rotation matrix as follows:

    R1 = cameraMatrix[0].inv()*H1*cameraMatrix[0];
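
And analogously for the second camera (assuming H2 is the second homography returned by stereoRectifyUncalibrated()):

    R2 = cameraMatrix[1].inv()*H2*cameraMatrix[1];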

To compute 3D points, I need the projection matrices, but I don't know how to estimate the translation vector.

I tried decomposeHomographyMat() and this solution (https://stackoverflow.com/a/10781165/3653104), but the rotation matrix it gives is not the same as R1.
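
For reference, the call looks roughly like this (a sketch; K0 stands for cameraMatrix[0], and the usual OpenCV headers are assumed). As far as I understand, decomposeHomographyMat() expects a plane-induced homography between two views and returns up to four candidate solutions:

    std::vector<Mat> Rs, ts, normals;
    int nSolutions = decomposeHomographyMat(H1, K0, Rs, ts, normals);
    // Each Rs[i]/ts[i] pair is only a candidate; none of them has to
    // coincide with R1 = K0.inv()*H1*K0, since the rectifying homography
    // is not, in general, a plane-induced homography between the views.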

When I check the rectified images with R1/R2 (using initUndistortRectifyMap() followed by remap()), the result seems correct (I checked with epipolar lines).
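
The check itself looks roughly like this (a sketch; K0/K1 and dist0/dist1 stand for my camera matrices and distortion coefficients, and imgSize is the image size):

    Mat map1x, map1y, map2x, map2y;
    initUndistortRectifyMap(K0, dist0, R1, K0, imgSize, CV_32FC1, map1x, map1y);
    initUndistortRectifyMap(K1, dist1, R2, K1, imgSize, CV_32FC1, map2x, map2y);

    Mat rect1, rect2;
    remap(img1, rect1, map1x, map1y, INTER_LINEAR);
    remap(img2, rect2, map2x, map2y, INTER_LINEAR);

    // After remapping, corresponding points should lie on the same image row,
    // i.e. the epipolar lines are horizontal and aligned across both images.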

I am a little lost, as my knowledge of vision is limited, so I would appreciate it if somebody could explain this to me. Thank you :)

1 Answer

The code in the link that you provided (https://stackoverflow.com/a/10781165/3653104) computes not the rotation but the 3x4 pose of the camera.

The last column of the pose is your translation vector.
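
In code, splitting that 3x4 pose into rotation and translation could look like this (a minimal sketch; P stands for the pose matrix computed in the linked answer):

    Mat R = P(Rect(0, 0, 3, 3)).clone();  // left 3x3 block: the rotation
    Mat t = P.col(3).clone();             // last column: the translation vector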
