
How do I retrieve the rotation matrix, the translation vector and possibly some scaling factors of each camera using OpenCV when I have pictures of an object taken from the view of each of these cameras? For every picture I have the image coordinates of several feature points, but not all feature points are visible in all of the pictures. I want to map the computed 3D coordinates of the object's feature points onto a slightly different object, in order to align the shape of the second object with the first one.

I heard it is possible using cv::calibrateCamera(...), but I can't quite get through it...

Does anyone have experience with this kind of problem?

Martin Hennig
  • It is not clear above whether you know the 3D world coordinates of the points that you observe in the different images. If you do, this is a Perspective-n-Point problem and you can calibrate the parameters of each camera using the EPnP algorithm available here: http://cvlab.epfl.ch/software/EPnP/index.php (a minimal cv::solvePnP sketch follows these comments). Otherwise, see my answer below. – Rulle Nov 19 '11 at 12:42
  • The 3D coordinates of the photographed object are unknown. – Martin Hennig Nov 20 '11 at 15:36
  • If the 3d world coordinates of points on the object are _unknown_, I don't think cv::calibrateCamera will work, because it seems to assume that the object points are _known_. – Rulle Nov 20 '11 at 17:42
  • You might want to look at bundle adjustments too: http://en.wikipedia.org/wiki/Bundle_adjustment. This assumes that you have an _initial estimate_ of the camera poses. The problem is then to reconstruct all points and the poses. – Rulle Nov 20 '11 at 18:07
  • This seems to be exactly what I needed! Thank you for your lasting attention and your time! – Martin Hennig Nov 20 '11 at 19:49
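
For the case Rulle describes, where the 3D world coordinates are known, a minimal sketch using cv::solvePnP might look like the following; the intrinsic matrix and all values here are placeholders, not data from the question.

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    // Placeholder data: at least 4 KNOWN 3D points and their pixel projections.
    std::vector<cv::Point3f> objectPoints; // known 3D feature coordinates
    std::vector<cv::Point2f> imagePoints;  // matching 2D image coordinates
    // ... fill from your correspondences ...

    // Assumed intrinsics; replace with your calibrated values.
    cv::Mat K = (cv::Mat_<double>(3, 3) << 800, 0, 320, 0, 800, 240, 0, 0, 1);
    cv::Mat dist = cv::Mat::zeros(4, 1, CV_64F); // assume no lens distortion

    cv::Mat rvec, tvec; // camera pose: rotation (Rodrigues vector) and translation
    cv::solvePnP(objectPoints, imagePoints, K, dist, rvec, tvec);

    cv::Mat R;
    cv::Rodrigues(rvec, R); // rvec -> 3x3 rotation matrix
    return 0;
}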

2 Answers


I was confronted with the same problem as you, in OpenCV: I had a stereo image pair and I wanted to compute the external parameters of the cameras and the world coordinates of all observed points. This problem has been treated here:

Berthold K. P. Horn. Relative orientation revisited. Artificial Intelligence Laboratory, Massachusetts Institute of Technology.

http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.64.4700

However, I wasn't able to find a suitable implementation of this approach (perhaps you will find one). Due to time constraints I was not able to work through all the maths in this paper and implement it myself, so I came up with a quick-and-dirty solution that works for me. I will explain what I did to solve it:

Assume we have two cameras, where the first camera has external parameters RT = Matx::eye(). Now make a guess about the rotation R of the second camera. For every pair of image points observed in both images, we compute the directions of their corresponding rays in world coordinates and store them in a 2D array dirs (EDIT: the internal camera parameters are assumed to be known; a sketch of this back-projection step follows the function below). We can do this because we assume that we know the orientation of every camera. Now we build an overdetermined linear system AC = 0, where C is the centre of the second camera. Here is the function that computes A:

// Note: Array<Vec3d, 2> and toVec(...) are the author's own helpers (a 2D array
// of ray directions and a Matx-to-Vec conversion); they are not part of OpenCV.
// Assumes <opencv2/opencv.hpp> is included and the cv namespace is in scope.
Mat buildA(Matx<double, 3, 3> &R, Array<Vec3d, 2> dirs)
{
    CV_Assert(dirs.size(0) == 2);      // one row of ray directions per camera
    int pointCount = dirs.size(1);
    Mat A(pointCount, 3, DataType<double>::type);
    Vec3d *a = (Vec3d *)A.data;
    for (int i = 0; i < pointCount; i++)
    {
        // Each row is the unit normal of the plane spanned by a ray pair, so
        // its dot product with the centre C measures the ray-pair distance.
        a[i] = dirs(0, i).cross(toVec(R*dirs(1, i)));
        double length = norm(a[i]);
        CV_Assert(length > 0.0);       // parallel rays would give a zero normal
        a[i] *= (1.0/length);
    }
    return A;
}
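
As an aside, here is a rough sketch of how the entries of dirs could be computed from pixel coordinates, assuming a simple pinhole model with known intrinsic matrix K; pixelToDir is a hypothetical helper, not part of the answer's code.

// Hypothetical helper (not from the original answer): back-project a pixel
// through a pinhole camera with inverse intrinsics Kinv to a unit ray
// direction in camera coordinates. For the first camera (identity pose) this
// is already a world-coordinate direction; for the second camera, buildA
// applies the rotation guess R.
Vec3d pixelToDir(const Matx<double, 3, 3> &Kinv, const Point2d &p)
{
    Vec3d dir = Kinv * Vec3d(p.x, p.y, 1.0); // K^-1 * homogeneous pixel
    return dir * (1.0 / norm(dir));          // normalize to unit length
}

Computing Kinv once from the intrinsic matrix and mapping every matched pixel through this helper would fill the two rows of dirs.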

Then calling cv::SVD::solveZ(A) will give you the least-squares solution of norm 1 to this system. This way, you obtain the rotation and translation of the second camera (the translation only up to scale, since C is constrained to have norm 1). However, since I just made a guess about the rotation of the second camera, I make several guesses about its rotation (parameterized using a 3x1 vector omega, from which I compute the rotation matrix using cv::Rodrigues) and then refine each guess by solving the system AC = 0 repeatedly in a Levenberg-Marquardt optimizer with a numeric Jacobian. It works for me, but it is a bit dirty, so if you have time, I encourage you to implement what is explained in the paper.

EDIT:

Here is the routine in the Levenberg-Marquardt optimizer for evaluating the vector of residues:

void Stereo::eval(Mat &X, Mat &residues, Mat &weights)
{
    Matx<double, 3, 3> R2Ref = getRot(X); // Map the 3x1 Euler angle vector to a rotation matrix
    Mat A = buildA(R2Ref, _dirs);         // Compute the A matrix that measures the distance between ray pairs
    Vec3d c;
    Mat cMat(c, false);                   // Mat header over c, no data copied
    SVD::solveZ(A, cMat);                 // Find the optimal centre of the second camera at distance 1 from the first camera
    residues = A*cMat;                    // Compute the output vector whose norm we are minimizing
    weights.setTo(1.0);                   // Weight all ray pairs equally
}
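
To show how the pieces fit together, here is a rough, hypothetical sketch of the outer refinement loop, simplified to a plain Gauss-Newton iteration with a numeric Jacobian (the original used a Levenberg-Marquardt optimizer); residuesFor and refineOmega are placeholder names, and Array<Vec3d, 2> is the author's container from above.

// Hypothetical sketch, not the author's exact code. Assumes the cv namespace
// and the buildA function above are available.
static Mat residuesFor(const Vec3d &omega, Array<Vec3d, 2> &dirs)
{
    Mat Rmat;
    Rodrigues(Mat(omega), Rmat);         // 3x1 rotation parameters -> 3x3 matrix
    Matx<double, 3, 3> R = Rmat;
    Mat A = buildA(R, dirs);             // one row per ray pair
    Vec3d c;
    Mat cMat(c, false);
    SVD::solveZ(A, cMat);                // best unit-norm camera centre for this rotation
    return A*cMat;                       // residues to drive towards zero
}

Vec3d refineOmega(Vec3d omega, Array<Vec3d, 2> &dirs, int iterations = 30)
{
    const double h = 1e-6;               // finite-difference step for the Jacobian
    for (int it = 0; it < iterations; it++)
    {
        Mat r = residuesFor(omega, dirs);
        Mat J(r.rows, 3, CV_64F);        // numeric Jacobian, one column per parameter
        for (int k = 0; k < 3; k++)
        {
            Vec3d w = omega;
            w[k] += h;
            Mat col = (residuesFor(w, dirs) - r) / h;
            col.copyTo(J.col(k));
        }
        Mat delta;
        solve(J, -r, delta, DECOMP_SVD); // least-squares Gauss-Newton step
        omega += Vec3d(delta.at<double>(0), delta.at<double>(1), delta.at<double>(2));
        if (norm(delta) < 1e-12)
            break;
    }
    return omega;
}

Trying a few different starting values for omega, as the answer suggests, and keeping the result with the smallest residual norm guards against local minima.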

By the way, I searched a little more on the internet and found some other code that could be useful for computing the relative orientation between cameras. I haven't tried any of it yet, but it seems useful:

http://www9.in.tum.de/praktika/ppbv.WS02/doc/html/reference/cpp/toc_tools_stereo.html

http://lear.inrialpes.fr/people/triggs/src/

http://www.maths.lth.se/vision/downloads/

Rulle
  • Thank you very much for your answer. I think I understand what you wrote and coded, and it will help me a great deal in finding a possible solution to my problem. However, I believe that `cv::calibrateCamera(...)` works quite similarly to what you proposed. The documentation describes the algorithm as follows (a minimal call sketch appears after these comments): – Martin Hennig Nov 20 '11 at 15:27
  • 1. First, it computes the initial intrinsic parameters (the option is only available for planar calibration patterns) or reads them from the input parameters. The distortion coefficients are all set to zeros initially (unless some of CV_CALIB_FIX_K? are specified). 2. The initial camera pose is estimated as if the intrinsic parameters were already known. This is done using FindExtrinsicCameraParams2. – Martin Hennig Nov 20 '11 at 15:30
  • 3. After that, the global Levenberg-Marquardt optimization algorithm is run to minimize the reprojection error, i.e. the total sum of squared distances between the observed feature points imagePoints and the projected (using the current estimates for camera parameters and poses) object points objectPoints; see ProjectPoints2. – Martin Hennig Nov 20 '11 at 15:31
  • This is the link to the documentation of `cv::calibrateCamera`: http://opencv.willowgarage.com/documentation/cpp/camera_calibration_and_3d_reconstruction.html – Martin Hennig Nov 20 '11 at 15:37
  • The above solution assumes that the 3d world coordinates are not known. If they are known, cv::calibrateCamera will probably work fine. – Rulle Nov 20 '11 at 17:37
  • @Jonas Östlund could you illustrate your code with a little example, please? – SecStone Nov 23 '11 at 19:18
  • For the moment, I don't have any example at hand that would easily illustrate it. But the idea is pretty simple: Every row in the **A** matrix measures the distance between a pair of rays from the two cameras and the optimizer finds the orientation of one camera with respect to the other camera such that the distance between all ray pairs is minimized. – Rulle Nov 23 '11 at 21:12
  • @JonasÖstlund: Can you explain the Levenberg-Marquardt algorithm in a little more detail... – Soumajyoti Sarkar Dec 31 '12 at 06:56
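
For reference, here is a minimal sketch of the cv::calibrateCamera call discussed in the comments above. It only applies when the 3D object points are known (e.g. a planar calibration pattern), which is exactly the limitation pointed out earlier; all sizes and values are placeholders.

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    // One vector of points per view; the object points must be KNOWN,
    // e.g. the corners of a planar chessboard pattern.
    std::vector<std::vector<cv::Point3f> > objectPoints;
    std::vector<std::vector<cv::Point2f> > imagePoints;
    // ... fill with detected pattern corners for every picture ...

    cv::Size imageSize(640, 480);      // placeholder image resolution
    cv::Mat cameraMatrix, distCoeffs;  // estimated intrinsics and distortion
    std::vector<cv::Mat> rvecs, tvecs; // one pose (rotation, translation) per view

    double rms = cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                                     cameraMatrix, distCoeffs, rvecs, tvecs);
    // rms is the reprojection error minimized by the LM step quoted above
    return 0;
}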

Are these static cameras which you wish to calibrate for future use as a stereo pair? In that case you would want to use the cv::stereoCalibrate() function. OpenCV contains some sample code, including stereo_calib.cpp, which may be worth investigating; a rough sketch of the call follows.
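
For what it's worth, a rough sketch of such a call, assuming known pattern points and pre-calibrated intrinsics (all names and values below are placeholders), might look like this:

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    // Known pattern points and their projections in the left/right images.
    std::vector<std::vector<cv::Point3f> > objectPoints;
    std::vector<std::vector<cv::Point2f> > imagePoints1, imagePoints2;
    // ... fill from detected pattern corners in each view ...

    // Intrinsics and distortion for each camera; with the default
    // CALIB_FIX_INTRINSIC flag these should come from a prior calibration.
    cv::Mat K1, D1, K2, D2;
    cv::Mat R, T, E, F;            // relative pose plus essential/fundamental matrices
    cv::Size imageSize(640, 480);  // placeholder

    cv::stereoCalibrate(objectPoints, imagePoints1, imagePoints2,
                        K1, D1, K2, D2, imageSize, R, T, E, F);
    return 0;
}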

Chris
  • Thanks for your answer, but I am not interested in using a stereo pair. I have one camera that is used to view the same object from different sides. The feature points marked in every picture should then help me compute new positions for corresponding points of a virtual object that is to be deformed slightly. The virtual object should then have exactly the same geometry as the real object that was photographed. BTW, `cv::calibrateCamera(...)` uses `cv::stereoCalibrate()` in the computation process. – Martin Hennig Nov 18 '11 at 15:16
  • Ok, is your single camera calibrated (i.e. do you know the intrinsic parameters: focal length, pixel skew, principal point, distortion coefficients)? – Chris Nov 18 '11 at 16:01
  • I could estimate them but the real values are unknown. – Martin Hennig Nov 18 '11 at 17:51