
Suppose I have two point clouds [x1, x2, x3...] and [y1, y2, y3...]. These two point clouds should be as close as possible. There are a lot of algorithms and deep learning techniques for the point cloud registration problem, but I have the extra information that point x1 should be aligned with y1, x2 with y2, and so on.

So the order of the points in both point clouds is the same. How can I use this to properly get the transformation matrix that aligns these two point clouds?

Note: These two point clouds are not exactly the same. Actually, I have a ground-truth point cloud [x1, x2, x3...] and I reconstructed another point cloud [y1, y2, y3...]. Now I want to align them and visualize whether the reconstruction is good or not.

Rahat Zaman
  • go with ICP, but you already have the matching, so you can skip the assignment step it does. what's left is some math that probably involves a jacobian or something. wikipedia, as usual, isn't helpful for understanding the "estimate a transform" step... this uses `cv.estimateRigidTransform`: https://stackoverflow.com/questions/20120384/iterative-closest-point-icp-implementation-on-python – Christoph Rackwitz Nov 03 '21 at 09:32
  • Does this answer your question? [opencv: Rigid Transformation between two 3D point clouds](https://stackoverflow.com/questions/21206870/opencv-rigid-transformation-between-two-3d-point-clouds) – Christoph Rackwitz Nov 03 '21 at 09:39
  • `cv.estimateAffine3D` seems to do what you need. it assumes matched point sets, as you have. it's also not quite affine, because the description says it only involves translation, rotation, scaling (and even those can be locked down) but who knows... in any case, it's a start – Christoph Rackwitz Nov 03 '21 at 09:43
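
For reference, the cv2.estimateAffine3D suggestion from the comment above might look like this in Python; the arrays and transform here are synthetic, purely for illustration, and the call assumes two matched (n, 3) float point sets:

import numpy as np
import cv2

# synthetic matched clouds, purely for illustration
X = np.random.default_rng(0).normal(size=(50, 3))
A_demo = np.array([[1.0, 0.2, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])
Y = X @ A_demo.T + np.array([0.1, 0.2, 0.3])

retval, M, inliers = cv2.estimateAffine3D(X, Y)   # expects two matched (n, 3) point sets
# M is a 3x4 matrix [A | t] mapping X onto Y; inliers marks the points kept by RANSAC
X_aligned = X @ M[:, :3].T + M[:, 3]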

3 Answers

2

The problem you are facing is an overdetermined system of equations, which is solvable with a closed-form expression. No need for iterative methods like ICP, since you have the correspondence between points.

If you're looking for a rigid or similarity transform (rotation and translation, optionally uniform scaling, but no shearing), you want Umeyama's algorithm [3], which is also closed form; there is a Python implementation here: https://gist.github.com/nh2/bc4e2981b0e213fefd4aaa33edfb3893
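
For reference, a minimal NumPy sketch of Umeyama's closed-form estimate, assuming both clouds are passed as (d, n) arrays with matched columns (the linked gist is the more battle-tested option):

import numpy as np

def umeyama(X, Y, with_scaling=True):
    # Returns (R, s, t) minimizing || (s * R @ X + t) - Y ||^2 over matched columns.
    d, n = X.shape
    mx, my = X.mean(axis=1, keepdims=True), Y.mean(axis=1, keepdims=True)
    Xc, Yc = X - mx, Y - my
    cov = Yc @ Xc.T / n                                  # cross-covariance, (d, d)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(d)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[-1, -1] = -1.0                                 # avoid reflections
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / ((Xc ** 2).sum() / n) if with_scaling else 1.0
    t = my - s * R @ mx
    return R, s, t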

If you are looking for an affine transform between your point clouds, i.e. a linear transform A (that allows shearing, see [2]) as well as a translation t (which is not linear):

Then, each of your points must satisfy the equation: y = Ax + t. Here we assume the following shapes: y:(d,n), A:(d,d), x:(d,n), t:(d,1) if each cloud has n points in R^d.

You can also write this in homogeneous notation by adding an extra coordinate, see [1]. This turns it into a linear system y = Mx, with many pairs (x, y) (assuming n > d) constraining the same M, i.e. an overdetermined system.

You can therefore solve this using a closed-form least-squares method:

import numpy as np
import scipy.linalg

# Inputs:
# - P, a (n,dim) [or (dim,n)] matrix, a point cloud of n points in dim dimension.
# - Q, a (n,dim) [or (dim,n)] matrix, a point cloud of n points in dim dimension.
# P and Q must be of the same shape.
# This function returns :
# - Pt, the P point cloud, transformed to fit to Q
# - (T,t) the affine transform
def affine_registration(P, Q):
    transposed = False
    if P.shape[0] < P.shape[1]:
        # heuristically detect (dim, n) input and work with (n, dim) internally
        transposed = True
        P = P.T
        Q = Q.T
    (n, dim) = P.shape
    # Compute least squares solution of [P | 1] @ p = Q
    p, res, rnk, s = scipy.linalg.lstsq(np.hstack((P, np.ones([n, 1]))), Q)
    # Get translation
    t = p[-1].T
    # Get transform matrix
    T = p[:-1].T
    # Compute transformed pointcloud
    Pt = P @ T.T + t
    if transposed:
        Pt = Pt.T
    return Pt, (T, t)
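
If useful, here is a quick synthetic sanity check of the function above; the transform, noise level and point count are made up for illustration:

import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[1.1, 0.2, 0.0],
                   [0.0, 0.9, 0.1],
                   [0.3, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])

P = rng.normal(size=(100, 3))                                 # 100 points in 3D
Q = P @ A_true.T + t_true + 0.01 * rng.normal(size=P.shape)   # noisy target cloud

Pt, (T, t) = affine_registration(P, Q)
print(np.abs(T - A_true).max(), np.abs(t - t_true).max())     # both should be small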

OpenCV has a function called getAffineTransform(), but it only takes 3 pairs of (2D) points as input: https://theailearner.com/tag/cv2-getaffinetransform/. This won't be robust in your case (e.g. if you give it only the first 3 pairs of points).

References:

[1] https://web.cse.ohio-state.edu/~shen.94/681/Site/Slides_files/transformation_review.pdf#page=24

[2] https://docs.opencv.org/3.4/d4/d61/tutorial_warp_affine.html

[3] https://stackoverflow.com/a/32244818/4195725

matthieu
1

As another user already mentioned, the ICP algorithm (implementation in PCL can be found here) can be used to register two point clouds to each other. However, ICP only converges locally, so the clouds have to be roughly aligned first.

I don't think there is a global registration in PCL at the moment, but I've used OpenGR which has a PCL wrapper.

If you know for sure that x1 is near y1, x2 is near y2 etc. you can do a manual alignment which will be a lot faster than global alignment:

  1. Translate the 2nd cloud by the vector x1 - y1, so that y1 lands on x1
  2. Rotate the vector y2 - y1 onto the vector x2 - x1 (a NumPy sketch of both steps follows below)

Then refine it using ICP.

This does not account for measurement errors, so using the matrix estimation above will be useful if your data is not 100% correct.
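
A rough NumPy sketch of those two steps in 3D, assuming X and Y are (n, 3) arrays with matched rows and Y is the cloud being moved; the remaining roll about the x2 - x1 axis, and any noise, are left for ICP:

import numpy as np

def rotation_between(a, b):
    # Rotation matrix sending direction a onto direction b (Rodrigues formula, 3D).
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    v, c = np.cross(a, b), np.dot(a, b)
    if np.isclose(c, -1.0):          # antiparallel: rotate 180 degrees about any perpendicular axis
        axis = np.cross(a, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-8:
            axis = np.cross(a, [0.0, 1.0, 0.0])
        axis /= np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + K + K @ K / (1.0 + c)

def rough_align(X, Y):
    Yt = Y + (X[0] - Y[0])                                # step 1: put y1 on x1
    R = rotation_between(Yt[1] - Yt[0], X[1] - X[0])      # step 2: rotate y2-y1 onto x2-x1
    return (Yt - Yt[0]) @ R.T + Yt[0]                     # rotate about the anchor x1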

LorenzBung
0

VTK's vtkLandmarkTransform also does the same thing, with support for RigidBody/Similarity/Affine transformation modes:

// need at least four pairs of points in sourcePoints and targetPoints;
// more pairs give a better least-squares fit
vtkLandmarkTransform landmarkTransform = new vtkLandmarkTransform();
landmarkTransform.SetSourceLandmarks(sourcePoints);  // source is to be transformed to match the target
landmarkTransform.SetTargetLandmarks(targetPoints);  // target stays still
landmarkTransform.SetMode(VTK_Modes.AFFINE);
landmarkTransform.Modified();  // mark the inputs as changed
landmarkTransform.GetMatrix(mtx);  // triggers the computation; mtx is a pre-allocated vtkMatrix4x4

// now you can apply the mtx to all points
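
From Python, a roughly equivalent sketch using VTK's Python bindings (assuming two (n, 3) NumPy arrays of matched 3D points):

import numpy as np
import vtk

def landmark_transform(src_np, tgt_np, mode="affine"):
    # src_np is moved onto tgt_np; both are (n, 3) arrays of matched points.
    def to_vtk_points(arr):
        pts = vtk.vtkPoints()
        for x, y, z in arr:
            pts.InsertNextPoint(float(x), float(y), float(z))
        return pts

    lt = vtk.vtkLandmarkTransform()
    lt.SetSourceLandmarks(to_vtk_points(src_np))
    lt.SetTargetLandmarks(to_vtk_points(tgt_np))
    if mode == "affine":
        lt.SetModeToAffine()
    elif mode == "similarity":
        lt.SetModeToSimilarity()
    else:
        lt.SetModeToRigidBody()
    lt.Update()                                  # compute the 4x4 homogeneous matrix
    m = lt.GetMatrix()
    return np.array([[m.GetElement(i, j) for j in range(4)] for i in range(4)])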
Guang