
I have two sets of 3D points (original and reconstructed) and correspondence information about the pairs, i.e. which point from one set represents which one in the second. I need to find the 3D translation and scaling factor which transforms the reconstructed set so that the sum of squared distances is least (rotation would be nice too, but the points are rotated similarly, so this is not the main priority and might be omitted for the sake of simplicity and speed). So my question is: is this solved and available somewhere on the Internet? Personally, I would use the least squares method, but I don't have much time (and although I'm somewhat good at math, I don't use it often, so it would be better for me to avoid it), so I would like to use someone else's solution if it exists. I prefer a solution in C++, for example using OpenCV, but the algorithm alone is good enough.

If there is no such solution, I will calculate it myself; I don't want to bother you too much.

SOLUTION: (from your answers)
For me it's the Kabsch algorithm.
Base info: http://en.wikipedia.org/wiki/Kabsch_algorithm
General solution: http://nghiaho.com/?page_id=671

STILL NOT SOLVED: I also need scale. The scale values from SVD are not understandable to me; when I need a scale of about 1-4 for all axes (estimated by me), the SVD scale is about [2000, 200, 20], which is not helping at all.

  • Probably the [Kabsch algorithm](http://en.wikipedia.org/wiki/Kabsch_algorithm) is what you need. The difference of the two centroids gives the translation; and after computing the SVD of the covariance matrix, the singular values give the scaling factors and the unitary matrices give the optimal rotation matrix. – Evgeny Kluev Nov 17 '12 at 18:04
  • Evgeny Kluev: thank you very much, it looks like that is it. I'll try it and post the results (it will take some time; I have some other things to implement). By the way, luckily for me, OpenCV contains an SVD calculator, which simplifies things a lot. –  Nov 17 '12 at 18:24
  • Evgeny Kluev: I deeply apologize for such a late reply: I had more important projects. I would like to ask: how should I interpret the scaling factors? These numbers are really big (200 - 2000) or small (~0.5), but from my judgment, the scale should be about 1-4. Also, the scale factors are often different for each axis (for example [2000, 200, 20]). –  Jan 23 '13 at 09:34
  • Actually there is no way to get the scaling factors directly from the singular values. My mistake. Sorry. An SVD-based algorithm may be applicable here, but I don't know how. In any case, you could try the more general Iterative closest point algorithm. – Evgeny Kluev Jan 23 '13 at 15:24
  • Have you looked at my answer below? You get the scale from Eigen as well: https://eigen.tuxfamily.org/dox/group__Geometry__Module.html#gab3f5a82a24490b936f8694cf8fef8e60 Of course, this assumes you have the correspondences. – bendervader May 12 '16 at 05:04
  • @bendervader I apologize, but no, not yet. I failed the project and even though I would like to return to it, I didn't have time for that yet. Life complications ;). –  May 15 '16 at 08:38

8 Answers


Since you are already using the Kabsch algorithm, just have a look at Umeyama's paper, which extends it to get the scale. All you need to do is get the standard deviation of your points and calculate the scale as:

(1/sigma^2)*trace(D*S)

where D is the diagonal matrix from the SVD decomposition in the rotation estimation and S is either the identity matrix or the diag(1, 1, -1) matrix, depending on the sign of the determinant of UV (which Kabsch uses to correct reflections into proper rotations). So if you have [2000, 200, 20], multiply the last element by ±1 (depending on the sign of the determinant of UV), sum them, and divide by the squared standard deviation of your points to get the scale.

You can recycle the following code, which is using the Eigen library:

typedef Eigen::Matrix<double, 3, 1, Eigen::DontAlign> Vector3d_U; // microsoft's 32-bit compiler can't put Eigen::Vector3d inside a std::vector. for other compilers or for 64-bit, feel free to replace this by Eigen::Vector3d

/**
 *  @brief rigidly aligns two sets of poses
 *
 *  This calculates such a relative pose <tt>R, t</tt>, such that:
 *
 *  @code
 *  _TyVector v_pose = R * r_vertices[i] + t;
 *  double f_error = (r_tar_vertices[i] - v_pose).squaredNorm();
 *  @endcode
 *
 *  The sum of squared errors in <tt>f_error</tt> for each <tt>i</tt> is minimized.
 *
 *  @param[in] r_vertices is a set of vertices to be aligned
 *  @param[in] r_tar_vertices is a set of vertices to align to
 *
 *  @return Returns a relative pose that rigidly aligns the two given sets of poses.
 *
 *  @note This requires the two sets of poses to have the corresponding vertices stored under the same index.
 */
static std::pair<Eigen::Matrix3d, Eigen::Vector3d> t_Align_Points(
    const std::vector<Vector3d_U> &r_vertices, const std::vector<Vector3d_U> &r_tar_vertices)
{
    _ASSERTE(r_tar_vertices.size() == r_vertices.size());
    const size_t n = r_vertices.size();

    Eigen::Vector3d v_center_tar3 = Eigen::Vector3d::Zero(), v_center3 = Eigen::Vector3d::Zero();
    for(size_t i = 0; i < n; ++ i) {
        v_center_tar3 += r_tar_vertices[i];
        v_center3 += r_vertices[i];
    }
    v_center_tar3 /= double(n);
    v_center3 /= double(n);
    // calculate centers of positions, potentially extend to 3D

    double f_sd2_tar = 0, f_sd2 = 0; // only one of those is really needed
    Eigen::Matrix3d t_cov = Eigen::Matrix3d::Zero();
    for(size_t i = 0; i < n; ++ i) {
        Eigen::Vector3d v_vert_i_tar = r_tar_vertices[i] - v_center_tar3;
        Eigen::Vector3d v_vert_i = r_vertices[i] - v_center3;
        // get both vertices

        f_sd2 += v_vert_i.squaredNorm();
        f_sd2_tar += v_vert_i_tar.squaredNorm();
        // accumulate squared standard deviation (only one of those is really needed)

        t_cov.noalias() += v_vert_i * v_vert_i_tar.transpose();
        // accumulate covariance
    }
    // calculate the covariance matrix

    Eigen::JacobiSVD<Eigen::Matrix3d> svd(t_cov, Eigen::ComputeFullU | Eigen::ComputeFullV);
    // calculate the SVD

    Eigen::Matrix3d R = svd.matrixV() * svd.matrixU().transpose();
    // compute the rotation

    double f_det = R.determinant();
    Eigen::Vector3d e(1, 1, (f_det < 0)? -1 : 1);
    // calculate determinant of V*U^T to disambiguate rotation sign

    if(f_det < 0)
        R.noalias() = svd.matrixV() * e.asDiagonal() * svd.matrixU().transpose();
    // recompute the rotation part if the determinant was negative

    R = Eigen::Quaterniond(R).normalized().toRotationMatrix();
    // renormalize the rotation (not needed but gives slightly more orthogonal transformations)

    double f_scale = svd.singularValues().dot(e) / f_sd2_tar;
    double f_inv_scale = svd.singularValues().dot(e) / f_sd2; // only one of those is needed
    // calculate the scale

    R *= f_inv_scale;
    // apply scale

    Eigen::Vector3d t = v_center_tar3 - (R * v_center3); // R needs to contain scale here, otherwise the translation is wrong
    // want to align center with ground truth

    return std::make_pair(R, t); // or put it in a single 4x4 matrix if you like
}
the swine
  • Correct answer with a citation to the paper. I haven't tested the code, but the method works correctly and is fully proved in the paper; it should get more upvotes. As for the OP, I think you get large values in D because you first need to scale the matrix you are decomposing by 1/N, where N is the number of your points. It turns out people often don't do this, as the translation and rotation estimation works correctly even without that. – Laky Sep 15 '15 at 11:09
  • @Laky well, to be honest I don't scale the covariance matrix either (as you can see in the code). The algorithm would probably work with it, but the scale calculation would not. Large numbers in D are not necessarily problematic, as the scale also depends on the standard deviation, which may be large (or small) by itself, irrespective of the scale. Thanks for the praise though. – the swine Sep 20 '15 at 10:55
  • As a side note, some people prefer [Horn's algorithm](http://www.osapublishing.org/josaa/fulltext.cfm?uri=josaa-4-4-629&id=2711). It is very similar to Kabsch, with the exception that it decomposes a 4x4 matrix and the output is a quaternion. To me, it only seems slightly more costly to compute, so I stick with Kabsch and convert the 3x3 matrix to a quaternion afterwards. I'm not quite sure if there is a version of Horn's algorithm which recovers the scale as well. – the swine Sep 20 '15 at 11:01
  • Thanks for the great answer. I've made a 20 lines Python implementation of the full problem [here](https://gist.github.com/nh2/bc4e2981b0e213fefd4aaa33edfb3893#file-rigid-transform-with-scale-py-L20-L39), which I've tried to tune for understandability. – nh2 Sep 10 '16 at 20:42
  • @nh2 sorry, but it gives me NameError: global name 'a1' is not defined at line 34.. Should it be P? – sunrise Aug 27 '19 at 17:37

For 3D points the problem is known as the Absolute Orientation problem. A C++ implementation is available in Eigen: http://eigen.tuxfamily.org/dox/group__Geometry__Module.html#gab3f5a82a24490b936f8694cf8fef8e60 and the paper is at http://web.stanford.edu/class/cs273/refs/umeyama.pdf

You can use it via OpenCV by converting the matrices to Eigen with cv::cv2eigen() calls.

bendervader

Start with a translation of both sets of points so that their centroids coincide with the origin of the coordinate system. The translation vector is just the difference between these centroids.

Now we have two sets of coordinates represented as matrices P and Q. One set of points may be obtained from the other by applying some linear operator (which performs both scaling and rotation). This operator is represented by a 3x3 matrix X:

P * X = Q

To find the proper scale/rotation we just need to solve this matrix equation, find X, and then decompose it into several matrices, each representing a scaling or a rotation.

A simple (but probably not numerically stable) way to solve it is to multiply both sides of the equation by the transposed matrix P^T (to get rid of non-square matrices), and then multiply both sides by the inverse of P^T * P:

P^T * P * X = P^T * Q

X = (P^T * P)^-1 * P^T * Q

Applying singular value decomposition to the matrix X gives two rotation matrices and a matrix with scale factors:

X = U * S * V

Here S is a diagonal matrix with scale factors (one scale for each coordinate), and U and V are rotation matrices: one properly rotates the points so that they may be scaled along the coordinate axes, the other rotates them once more to align their orientation with the second set of points.


Example (2D points are used for simplicity):

P =  1     2     Q =   7.5391    4.3455
     2     3          12.9796    5.8897
    -2     1          -4.5847    5.3159
    -1    -6         -15.9340  -15.5511

After solving the equation:

X =  3.3417   -1.2573
     2.0987    2.8014

After SVD decomposition:

U = -0.7317   -0.6816
    -0.6816    0.7317

S =  4  0
     0  3

V = -0.9689   -0.2474
    -0.2474    0.9689

Here SVD has properly reconstructed all manipulations I performed on matrix P to get matrix Q: rotate by the angle 0.75, scale X axis by 4, scale Y axis by 3, rotate by the angle -0.25.


If the sets of points are scaled uniformly (the scale factor is equal along each axis), this procedure may be significantly simplified.

Just use the Kabsch algorithm to get the translation/rotation values. Then apply this translation and rotation (the centroids should coincide with the origin of the coordinate system). Then, for each pair of points (and for each coordinate), estimate a linear regression; the linear regression coefficient is exactly the scale factor.

Evgeny Kluev
  • How did you determine the rotations from the U and V matrices? Are the rotations in radians? – blueshift May 03 '16 at 18:49
    @blueshift: in the 2D case it is trivial: the matrix components are just the sin and cos of the angle. In the 3D case, the rotation axis is determined by the eigenvector; then you find the angle between any vector perpendicular to the axis and the same vector transformed by the rotation matrix. See details on [wikipedia](https://en.wikipedia.org/wiki/Rotation_matrix). – Evgeny Kluev May 03 '16 at 20:00
  • Thanks, Evgeny, especially for the Wikipedia link. In your example, how did you determine whether to use -0.7317 or 0.7317 for the cosine? – blueshift May 04 '16 at 15:51
  • @blueshift: Honestly, I don't know. The problem is that some rows of the matrix from SVD are negated, and I don't know which. The only way to use these data (without digging deeper into the problem) is to try both possibilities, apply the rotation to actual points and see which one is correct. – Evgeny Kluev May 04 '16 at 16:46

A good explanation: Finding optimal rotation and translation between corresponding 3D points

The code is in Matlab, but it's trivial to convert to OpenCV using the cv::SVD function.

Martin Beckett
  • I deeply apologize for such a late reply; I had more important projects. Thank you very much for the link, it helped me. I implemented it with cv::SVD and it works just fine. But I still have one question: what about the scale? The scale values from SVD are too big or too small. –  Jan 23 '13 at 09:28
  • That approach will only work with equal-sized clouds. Normally you know the clouds are the same scale. I haven't tried it, but there is a scale approach: http://www.google.com/url?sa=t&rct=j&q=optimal%20rotation%20point%20cluds%20with%20scale&source=web&cd=4&cad=rja&ved=0CEsQFjAD&url=http%3A%2F%2Fmi.eng.cam.ac.uk%2F~cipolla%2Fpublications%2FcontributionToEditedBook%2F2012-ICVSS-Toshiba.pdf&ei=yO7_UIT1LajyigLcp4CoCQ&usg=AFQjCNE-bHBZgquyyLbYVo-xavM4sWcTRA&bvm=bv.41248874,d.cGE – Martin Beckett Jan 23 '13 at 14:07

You might want to try ICP (Iterative closest point). Given two sets of 3D points, it will tell you the transformation (rotation + translation) to go from the first set to the second. If you're interested in a lightweight C++ implementation, try libicp.

Good luck!

Nacho
  • Thank you for your answer (and apologies for the late reply). I used the Kabsch algorithm instead. Also, I need a scale value. –  Jan 23 '13 at 09:44
  • ICP is unnecessarily complicated and slow if the correspondences between the points are known. For Kabsch, you already know the correspondences between the points. ICP can match any two point sets (different numbers of points, only partially overlapping shapes, ...), but of course at a much higher price and with the risk that it will not converge to the best solution. – the swine Aug 27 '15 at 09:40

The general transformation, as well as the scale, can be retrieved via Procrustes Analysis. It works by superimposing the objects on top of each other and estimating the transformation from that setting. It has been used in the context of ICP many times. In fact, your preference, the Kabsch algorithm, is a special case of this.

Moreover, Horn's alignment algorithm (based on quaternions) also finds a very good solution, while being quite efficient. A Matlab implementation is also available.

Tolga Birdal
  • Note: I'm monitoring this, however I'm really busy with other work, so it'll take time before I have time to return to this project. Anyway, thanks for answer to such old question. –  Apr 20 '15 at 17:28

The scale can be inferred without SVD if your points are uniformly scaled in all directions (I could not make sense of SVD's scale matrix either). Here is how I solved the same problem:

  1. Measure the distances of each point to the other points in the point cloud to get a 2D table of distances, where the entry at (i, j) is norm(point_i - point_j). Do the same for the other point cloud, so you get two tables: one for the original and one for the reconstructed points.

  2. Divide all the values in one table by the corresponding values in the other table. Because the points correspond to each other, the distances do too. Ideally, the resulting table would have all values equal to each other, and this is the scale.

  3. The median of the divisions should be pretty close to the scale you are looking for. The mean is also close, but I chose the median just to exclude outliers.

Now you can use the scale value to scale all the reconstructed points and then proceed to estimating the rotation.

Tip: If there are too many points in the point clouds to find the distances between all of them, then a smaller subset of distances will work too, as long as it is the same subset for both point clouds. Ideally, just one distance pair would work if there is no measurement noise, e.g. when one point cloud is directly derived from the other by just rotating it.

Rasmus

You can also use the ScaleRatio ICP proposed by BaoweiLin. The code can be found on GitHub.