
What I am trying to do is overlay the image from the right camera onto the image from the left camera. To do this, I think I have to find the rotation and translation matrix of the right camera with respect to the left camera, and then apply that rotation and translation to the right camera's image?

Assuming this is the correct approach, I would get the rotation and translation matrix from cvStereoCalibrate(), but how would I use these matrices to produce "mapx" and "mapy" so I could pass them to cvRemap() for the right image?

This is similar to getting the intrinsics and distortion coefficients from cvCalibrateCamera2(), then using cvInitUndistortMap() to get mapx and mapy, and finally using cvRemap() to get the undistorted image.
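For reference, here is my understanding of what cvInitUndistortMap() computes, sketched in NumPy. This is only an illustration of the standard radial/tangential distortion model, not OpenCV's actual implementation, and the coefficient names (k1, k2, p1, p2) are my assumption:

```python
import numpy as np

def init_undistort_map(K, dist, width, height):
    """Build mapx/mapy the way cvInitUndistortMap does: for each pixel of
    the undistorted output image, find where to sample the distorted input."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    k1, k2, p1, p2 = dist
    u, v = np.meshgrid(np.arange(width), np.arange(height))
    # normalized ideal (undistorted) coordinates
    x = (u - cx) / fx
    y = (v - cy) / fy
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2
    # forward-apply the distortion model to get source coordinates
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    mapx = (fx * xd + cx).astype(np.float32)
    mapy = (fy * yd + cy).astype(np.float32)
    return mapx, mapy
```

With all distortion coefficients at zero the maps are the identity, which is a quick sanity check.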

Is there a cvInitUndistortMap() equivalent for rotation and translation?

I don't need the images to appear as if they had been taken by two cameras that are row aligned. I want to calibrate it for the Microsoft Kinect such that I can match points from the depth stream to the video stream.
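To illustrate the mapping I am after: given R and T from stereo calibration and a depth value per pixel, each depth pixel can be back-projected to 3D, moved into the RGB camera's frame, and reprojected. A minimal sketch in NumPy (the intrinsic matrices here are placeholders, not real Kinect calibration values):

```python
import numpy as np

def depth_to_rgb_pixel(u, v, z, K_depth, K_rgb, R, T):
    """Back-project depth pixel (u, v) with depth z to a 3D point,
    transform it into the RGB camera's frame with (R, T), and reproject."""
    # 3D point in the depth camera's coordinate frame
    P = z * np.linalg.inv(K_depth) @ np.array([u, v, 1.0])
    # same point expressed in the RGB camera's frame
    P_rgb = R @ P + T
    # perspective projection into the RGB image
    p = K_rgb @ P_rgb
    return p[0] / p[2], p[1] / p[2]
```

Note this is a per-pixel, depth-dependent mapping, so unlike plain undistortion it cannot be captured by a single fixed mapx/mapy pair.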

Thanks, Tyro


SeriousTyro
  • This could interest you as well... http://stackoverflow.com/questions/6744094/cvreprojectimageto3d-3d-modelling-from-2d-images-issue/6750547#6750547 Julien, – jmartel Jul 28 '11 at 07:00

1 Answer


What you are trying to do is called image rectification.

In a nutshell, you need to find the Fundamental matrix relating the two cameras and then compute rectifying homographies to project the images onto the same plane.

See this question and answer for an overview with a bit more detail.
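A minimal sketch of the first step, estimating the Fundamental matrix from point correspondences with the normalized 8-point algorithm (pure NumPy, no error handling; in practice you would use OpenCV's own estimator):

```python
import numpy as np

def normalize(pts):
    # translate points to their centroid, scale so mean distance is sqrt(2)
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
    ph = np.column_stack([pts, np.ones(len(pts))]) @ T.T
    return ph, T

def fundamental_matrix(pts1, pts2):
    """Normalized 8-point estimate of F satisfying x2^T F x1 = 0."""
    p1, T1 = normalize(pts1)
    p2, T2 = normalize(pts2)
    # each correspondence gives one linear constraint on the 9 entries of F
    A = np.column_stack([p2[:, i] * p1[:, j]
                         for i in range(3) for j in range(3)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # enforce rank 2 (every fundamental matrix is singular)
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0]) @ Vt
    # undo the normalization
    return T2.T @ F @ T1
```

From F you can then compute rectifying homographies (e.g. with cvStereoRectifyUncalibrated) and warp each image with cvRemap.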

koan
  • I don't need to do stereo rectification. I don't need the images to appear as if they had been taken by two cameras that are row aligned. I want to calibrate it for the Microsoft Kinect such that I can match points from the depth stream to the video stream. – SeriousTyro Jul 27 '11 at 23:00
  • I think your question is very unclear. When stereo images are rectified then disparity estimation is a one dimensional search. Even with a very basic disparity search you should be able to match up a lot of the pixels to depth. "Depth stream" from Kinect is disparity, right ? – koan Jul 28 '11 at 08:57
  • Yeah, the depth stream from the Kinect is disparity, but the two streams are offset a bit: if you overlay the RGB and depth frames, they do not line up. – SeriousTyro Aug 03 '11 at 00:56