What I am trying to do is overlay the image from the right camera onto the image from the left camera. To do this, I think I have to find the rotation and translation of the right camera with respect to the left camera. Then I would apply that rotation and translation to the right camera's image?
Assuming this is the right approach, I would get the rotation matrix and translation vector from cvStereoCalibrate(), but how would I use them to produce "mapx" and "mapy" so I could pass them to cvRemap() for the right image?
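For what it's worth, my understanding of the R and T that cvStereoCalibrate() returns (and I may have the direction backwards) is that they relate 3-D points in the two camera coordinate frames, with the left camera passed as the first camera, roughly as:

    X_right = R * X_left + T

and that is what I would somehow need to turn into the per-pixel mapx/mapy maps.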
This would be similar to getting the intrinsic matrix and distortion coefficients from cvCalibrateCamera2(), then using cvInitUndistortMap() to get mapx and mapy, and finally using cvRemap() to get the undistorted image.
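For reference, this is a minimal sketch of that single-camera flow (the file names Intrinsics.xml, Distortion.xml and right.png are just placeholders; I'm assuming the intrinsics and distortion were already computed with cvCalibrateCamera2() and saved with cvSave()):

    #include <cv.h>
    #include <highgui.h>

    int main(void)
    {
        /* Intrinsics and distortion previously computed with
           cvCalibrateCamera2() and written out with cvSave(). */
        CvMat* intrinsic  = (CvMat*)cvLoad("Intrinsics.xml", NULL, NULL, NULL);
        CvMat* distortion = (CvMat*)cvLoad("Distortion.xml", NULL, NULL, NULL);

        IplImage* image = cvLoadImage("right.png", CV_LOAD_IMAGE_COLOR);

        /* mapx/mapy tell cvRemap, for every destination pixel,
           which source pixel to sample. */
        IplImage* mapx = cvCreateImage(cvGetSize(image), IPL_DEPTH_32F, 1);
        IplImage* mapy = cvCreateImage(cvGetSize(image), IPL_DEPTH_32F, 1);
        cvInitUndistortMap(intrinsic, distortion, mapx, mapy);

        IplImage* undistorted = cvCloneImage(image);
        cvRemap(image, undistorted, mapx, mapy,
                CV_INTER_LINEAR + CV_WARP_FILL_OUTLIERS, cvScalarAll(0));

        cvNamedWindow("undistorted", CV_WINDOW_AUTOSIZE);
        cvShowImage("undistorted", undistorted);
        cvWaitKey(0);
        return 0;
    }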
Is there a cvInitUndistortMap() equivalent for rotation and translation?
I don't need the images to appear as if they had been taken by two row-aligned cameras. I want to calibrate the Microsoft Kinect so that I can match points from its depth stream to points in its video stream.
Thanks, Tyro