
I'm in the situation where I need to find the relative camera poses between two or more cameras based on image correspondences (so the cameras are not at the same point). To solve this I tried the same approach as described here (code below).

cv::Mat calibration_1 = ...;
cv::Mat calibration_2 = ...;
cv::Mat calibration_target = calibration_1;
calibration_target.at<float>(0, 2) = 0.5f * frame_width;  // principal point
calibration_target.at<float>(1, 2) = 0.5f * frame_height; // principal point

auto fundamental_matrix = cv::findFundamentalMat(left_matches, right_matches, CV_RANSAC);
fundamental_matrix.convertTo(fundamental_matrix, CV_32F);
cv::Mat essential_matrix = calibration_2.t() * fundamental_matrix * calibration_1;
cv::SVD svd(essential_matrix);
cv::Matx33f w(0,-1,0,
    1,0,0,
    0,0,1);
cv::Matx33f w_inv(0,1,0,
    -1,0,0,
    0,0,1);
cv::Mat rotation_between_cameras = svd.u * cv::Mat(w) * svd.vt; //HZ 9.19

But in most of my cases I get extremely weird results. So my next thought was to use a full-fledged bundle adjuster (which should do what I am looking for?!). Currently my only big dependency is OpenCV, and it only has an undocumented bundle adjustment implementation.

So the question is:

  • Is there a bundle adjuster which has no dependencies and is under a licence that allows commercial use?
  • Are there other easy ways to find the extrinsics?
  • Are objects with very different distances to the cameras a problem? (heavy parallax)

Thanks in advance

Daniel
  • Can you provide more details? Do both cameras have a common Field of View (FOV)? – satishffy Nov 26 '12 at 16:49
  • The cameras are the same model - but not identical - so the intrinsics could be slightly different. Ideally the intrinsics would get adjusted too, but this is currently not my priority – Daniel Nov 26 '12 at 17:59
  • My question was whether both cameras have a common area that can be imaged. If yes, you can place a chessboard in that common area and find the relative pose between cameras. – satishffy Nov 27 '12 at 16:45
  • The cameras have an overlap of ~30% of the horizontal FOV, but I can't use chessboards or any other pattern - my input is feature correspondences – Daniel Nov 28 '12 at 14:47

2 Answers


I'm also working on the same problem and facing similar issues. Here are some suggestions:

  1. Modify the Essential Matrix Before Decomposition: with [U, W, Vt] = SVD(E), replace E by E' = U diag(s, s, 0) Vt, where s = (W(0,0) + W(1,1)) / 2

  2. 2-Stage Fundamental Matrix Estimation: recalculate the fundamental matrix using only the RANSAC inliers

These steps should make the rotation estimation more robust to noise.

Aarambh

You have to generate 4 different solutions and select the one with the largest number of points having positive Z coordinates. The solutions are generated by inverting the sign of the essential matrix and by substituting w with w_inv - which you did not do, though you calculated w_inv. Are you reusing somebody else's code?

Vlad