The concept of extrinsic parameters of a camera seems fuzzy to me when there is only one viewing plane. What are the rotation matrix and translation vector relative to when we only have one image? Why wouldn't this always be the origin?
- Please elaborate - what do you mean by "only one image", for example? – Alnitak Apr 09 '11 at 15:22
- For example, in OpenCV you can run cvCalibrateCamera2 (which calibrates the camera, recovering the intrinsic matrix, extrinsic parameters, and distortion coefficients) on correspondences from a single image. The retrieved parameters include a rotation and translation vector. – MarkBiz Apr 09 '11 at 15:26
- Ah, so you're talking about image decoding, rather than 3D image construction? – Alnitak Apr 09 '11 at 15:29
- Actually, I am talking about 3D image reconstruction. All in all, I am trying to figure out how to find the rotation/translation between two 3D cameras (where I have relative depth in the images for both cameras). This was an extra thought. – MarkBiz Apr 09 '11 at 15:32
- Have a look at this: http://stackoverflow.com/questions/3712049/how-to-use-an-opencv-rotation-and-translation-vector-with-opengl-es-in-android – coder9 Nov 16 '11 at 08:49
1 Answer
It appears you are recovering extrinsic parameters (R+T, 6 DOF) from one image of a known object (a calibration target). If this is true, then the recovered parameters correspond to the camera pose relative to the intrinsic coordinate system of the calibration target.
For instance, if you are viewing a planar target as in Zhang's calibration method, and if you denote the target point coordinates as (0,0), (0,1), (0,2), ..., (1,0), (1,1), etc., then the recovered camera pose is relative to the coordinate system with origin at (0,0), and whose axes are defined by the vectors e1((0,0),(0,1)), e2((0,0),(1,0)) and e3 = e1 x e2.
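To make the convention concrete, here is a minimal NumPy sketch of what those extrinsics mean. The R and t below are made-up illustrative values (not output of a real calibration); the point is the mapping convention X_cam = R @ X_target + t, which is the convention OpenCV's calibration routines also use:

```python
import numpy as np

# Hypothetical extrinsics "recovered" from one image of a planar target.
# By convention they map points FROM the target's coordinate frame
# TO the camera frame:  X_cam = R @ X_target + t
theta = np.deg2rad(30)  # camera tilted 30 degrees about the target's x-axis
R = np.array([[1.0, 0.0,            0.0],
              [0.0, np.cos(theta), -np.sin(theta)],
              [0.0, np.sin(theta),  np.cos(theta)]])
t = np.array([0.0, 0.0, 5.0])  # target origin sits 5 units in front of the camera

# The target's origin -- grid corner (0,0) -- expressed in camera
# coordinates is simply t:
origin_cam = R @ np.zeros(3) + t

# Conversely, the camera center expressed in the TARGET's frame:
C_target = -R.T @ t

print(origin_cam)  # equals t
print(C_target)    # camera position relative to the target's origin
```

So the translation vector is not "the origin" in any absolute sense: it is the position of the target's origin in the camera frame, and inverting the transform gives the camera's position in the target's frame.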

ssegvic