I'm currently working on a project to recover the camera's 6-DOF pose from two images using SIFT/SURF matches. With older versions of OpenCV, I used findFundamentalMat to estimate the fundamental matrix, computed the essential matrix from it using the known camera intrinsics K, and finally obtained R and t by matrix decomposition. The result is very sensitive and unstable.
I saw that other people have the same issue here: OpenCV findFundamentalMat very unstable and sensitive
Some people suggest applying Nistér's 5-point algorithm, which is implemented in the latest OpenCV 3.0 release.
I have read an example from the OpenCV documentation. In the example, it uses focal = 1.0 and Point2d pp(0.0, 0.0).
Are these the real focal length and principal point of the camera? What are the units: pixels, or physical size? I am having trouble understanding these two parameters. I think they should be obtained from a calibration routine, right?
For my current camera (VGA mode), I used the Matlab Camera Calibrator to obtain these two parameters:

Focal length (millimeters): [1104 1102]
Principal point (pixels): [259 262]
So if I want to use my camera's parameters instead, should I fill in these values directly, or should I first convert them to physical units such as millimeters?
Also, the translation I get looks like a unit direction rather than a metric displacement. Is there any way to recover the translation with its actual scale, rather than just a direction?
Any help is appreciated.