I'm working on stereo vision with the stereoRectifyUncalibrated() function under OpenCV 3.0.
I calibrate my system with the following steps (sketched in code below):
- Detect and match SURF feature points between the images from the 2 cameras
- Apply findFundamentalMat() with the matched pairs
- Get the rectifying homographies with stereoRectifyUncalibrated()
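
A rough sketch of these three steps (assuming OpenCV 3.0 built with the opencv_contrib modules, since SURF lives in xfeatures2d; img1/img2 stand for the grayscale images of the two cameras, and the SURF/RANSAC thresholds are just placeholder values):

#include <opencv2/opencv.hpp>
#include <opencv2/xfeatures2d.hpp>

// Detect/match SURF points, estimate F, and get the rectifying homographies.
void rectifyUncalibratedPair(const cv::Mat& img1, const cv::Mat& img2,
                             std::vector<cv::Point2f>& pts1,
                             std::vector<cv::Point2f>& pts2,
                             cv::Mat& H1, cv::Mat& H2)
{
    // SURF keypoints + descriptors in both images
    cv::Ptr<cv::xfeatures2d::SURF> surf = cv::xfeatures2d::SURF::create(400.0);
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desc1, desc2;
    surf->detectAndCompute(img1, cv::noArray(), kp1, desc1);
    surf->detectAndCompute(img2, cv::noArray(), kp2, desc2);

    // Brute-force matching, keeping the matched pixel coordinates
    cv::BFMatcher matcher(cv::NORM_L2, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(desc1, desc2, matches);
    for (const cv::DMatch& m : matches) {
        pts1.push_back(kp1[m.queryIdx].pt);
        pts2.push_back(kp2[m.trainIdx].pt);
    }

    // Fundamental matrix from the matched pairs (RANSAC)
    cv::Mat F = cv::findFundamentalMat(pts1, pts2, cv::FM_RANSAC, 3.0, 0.99);

    // Rectifying homographies H1/H2
    cv::stereoRectifyUncalibrated(pts1, pts2, F, img1.size(), H1, H2);
}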
For each camera, I compute a rotation matrix as follows:
R1 = cameraMatrix[0].inv()*H1*cameraMatrix[0];
To compute 3D points, I need the projection matrices, but I don't know how to estimate the translation vector.
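
For reference, this is the kind of projection matrix I am trying to build (only a sketch of the usual P = K*[R|t] pinhole form; pts1/pts2 are the matched points from above, and t2 is exactly the translation I don't know how to estimate, so it is left as a zero placeholder here):

// Sketch only: build P = K * [R | t] for each camera, then triangulate.
cv::Mat t1 = cv::Mat::zeros(3, 1, CV_64F);   // reference camera at the origin
cv::Mat t2 = cv::Mat::zeros(3, 1, CV_64F);   // <-- the translation I cannot estimate

cv::Mat Rt1, Rt2;
cv::hconcat(R1, t1, Rt1);                    // 3x4 [R1 | t1]
cv::hconcat(R2, t2, Rt2);                    // 3x4 [R2 | t2]
cv::Mat P1 = cameraMatrix[0] * Rt1;          // 3x4 projection matrix, camera 0
cv::Mat P2 = cameraMatrix[1] * Rt2;          // 3x4 projection matrix, camera 1

cv::Mat points4D;                            // homogeneous 3D points (4xN)
cv::triangulatePoints(P1, P2, pts1, pts2, points4D);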
I tried decomposeHomographyMat() and this solution https://stackoverflow.com/a/10781165/3653104, but the rotation matrix is not the same as what I get with R1.
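
Roughly what I tried with decomposeHomographyMat() (it returns up to four candidate rotation/translation/plane-normal triplets for the homography):

// Candidate decompositions of H1 using the first camera matrix.
std::vector<cv::Mat> rotations, translations, normals;
int nSolutions = cv::decomposeHomographyMat(H1, cameraMatrix[0],
                                            rotations, translations, normals);
// None of rotations[0..nSolutions-1] matches the R1 computed above.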
When I check the rectified images with R1/R2 (using initUndistortRectifyMap() followed by remap()), the result seems correct (I checked with epipolar lines).
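
For completeness, this is roughly how I build and apply the rectification maps (distCoeffs[0]/distCoeffs[1] stand for my distortion coefficients, and I simply reuse the original camera matrices as the new ones):

// Build the rectification maps with R1/R2 and warp both images.
cv::Mat map1x, map1y, map2x, map2y;
cv::initUndistortRectifyMap(cameraMatrix[0], distCoeffs[0], R1, cameraMatrix[0],
                            img1.size(), CV_32FC1, map1x, map1y);
cv::initUndistortRectifyMap(cameraMatrix[1], distCoeffs[1], R2, cameraMatrix[1],
                            img2.size(), CV_32FC1, map2x, map2y);

cv::Mat rect1, rect2;
cv::remap(img1, rect1, map1x, map1y, cv::INTER_LINEAR);
cv::remap(img2, rect2, map2x, map2y, cv::INTER_LINEAR);
// The epipolar lines come out horizontal and aligned in rect1/rect2.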
I am a little lost given my weak knowledge of computer vision, so I would be grateful if somebody could explain this to me. Thank you :)