
I'm currently working on a stereo vision project in which I'm supposed to reconstruct 3D points from correspondences found in each camera view, and for that I'm using OpenCV 2.4.7 with C++.

I was able to correctly calibrate both cameras, compute the fundamental matrix, compute the re-projection matrix, and rectify the images.

My problem lies in the final part of the project: computing 3D world coordinates from 2D point correspondences. I already tried cv::triangulatePoints, but the results were points with coordinates (0, 0, 0), no matter what the input points were. I also tried the linear triangulation algorithm by Hartley & Sturm, but that didn't give me good results either.

Could somebody give me a hint on which function I should use, or some tips on how to correctly implement the ones I've mentioned? My biggest problem is finding good documentation on the internet, which is why I decided to ask here.

Thank you!

  • You might be passing wrong input to the `triangulatePoints` method. Check this other question: http://stackoverflow.com/questions/16295551/how-to-correctly-use-cvtriangulatepoints – Esparver Jun 28 '14 at 15:27

1 Answer


I tried cv::triangulatePoints as well, and it produced garbage. I was forced to implement a linear triangulation method manually, which returns the triangulated 3D point for a given stereo pixel correspondence:

// Linear least-squares triangulation: from x = P*X, each view contributes
// two rows of A*X = b, with the translation column of P moved into b.
Mat triangulate_Linear_LS(Mat mat_P_l, Mat mat_P_r, Mat warped_back_l, Mat warped_back_r)
{
    Mat A(4,3,CV_64FC1), b(4,1,CV_64FC1), X(3,1,CV_64FC1),
        X_homogeneous(4,1,CV_64FC1), W(1,1,CV_64FC1);
    W.at<double>(0,0) = 1.0;
    // Normalized pixel coordinates: divide by the homogeneous scale.
    double u_l = warped_back_l.at<double>(0,0) / warped_back_l.at<double>(2,0);
    double v_l = warped_back_l.at<double>(1,0) / warped_back_l.at<double>(2,0);
    double u_r = warped_back_r.at<double>(0,0) / warped_back_r.at<double>(2,0);
    double v_r = warped_back_r.at<double>(1,0) / warped_back_r.at<double>(2,0);
    // Two equations per view: u*P.row(2) - P.row(0) and v*P.row(2) - P.row(1).
    for (int j = 0; j < 3; ++j)
    {
        A.at<double>(0,j) = u_l * mat_P_l.at<double>(2,j) - mat_P_l.at<double>(0,j);
        A.at<double>(1,j) = v_l * mat_P_l.at<double>(2,j) - mat_P_l.at<double>(1,j);
        A.at<double>(2,j) = u_r * mat_P_r.at<double>(2,j) - mat_P_r.at<double>(0,j);
        A.at<double>(3,j) = v_r * mat_P_r.at<double>(2,j) - mat_P_r.at<double>(1,j);
    }
    b.at<double>(0,0) = -(u_l * mat_P_l.at<double>(2,3) - mat_P_l.at<double>(0,3));
    b.at<double>(1,0) = -(v_l * mat_P_l.at<double>(2,3) - mat_P_l.at<double>(1,3));
    b.at<double>(2,0) = -(u_r * mat_P_r.at<double>(2,3) - mat_P_r.at<double>(0,3));
    b.at<double>(3,0) = -(v_r * mat_P_r.at<double>(2,3) - mat_P_r.at<double>(1,3));
    solve(A, b, X, DECOMP_SVD);    // least-squares solution of A*X = b
    vconcat(X, W, X_homogeneous);  // append w = 1 to get a 4x1 homogeneous point
    return X_homogeneous;
}

The input parameters are the two 3x4 camera projection matrices and the corresponding left/right homogeneous pixel coordinates.

YuZ
  • do yourself a favour and use `Mat_` for this kind of adventure; you can then skip the `at` part, and access becomes e.g. a nice `b(1,0)`. Huge gain in readability/clarity – berak Sep 12 '14 at 15:08
  • I'll try it out some day – YuZ Sep 12 '14 at 15:11
  • Would you please explain why, for example in `(warped_back_r.at(1,0)/warped_back_r.at(2,0))*mat_P_r.at(2,3) - mat_P_r.at(1,3)`, r(1,0) is divided by r(2,0) and multiplied by P(2,3)? – stackoverflower Sep 28 '18 at 02:13
  • The algorithm is linear triangulation (for reference, read chapter 12.2 of the free online book "Multiple View Geometry in Computer Vision" by Hartley and Zisserman). The idea is to infer X in the equation x=PX, with x as pixel coordinates, P as the camera matrix, and X as space coordinates. Since the left and right terms are equal, their cross product is zero, which can be reshaped as AX=0. The above method solves this equation after creating A from x and P. – YuZ Sep 30 '18 at 19:01