I know that in OpenCV we can get the affine transformation between two sets of points with getAffineTransform().

But getRotationMatrix2D() only supports a pre-computed angle and scale.

How can I compute the similarity transformation matrix given two sets of points?

dontloo

2 Answers


There is cv::estimateRigidTransform. You can choose between a full affine transform, which has 6 degrees of freedom (rotation, translation, scaling, shearing), or a partial affine (rotation, translation, uniform scaling), which has 5 degrees of freedom.

You can compute the similarity transform from two vector&lt;Point2f&gt; p1 and p2 with the code from this answer:

cv::Mat R = cv::estimateRigidTransform(p1,p2,false);

// extend rigid transformation to use perspectiveTransform:
cv::Mat H = cv::Mat(3,3,R.type());
H.at<double>(0,0) = R.at<double>(0,0);
H.at<double>(0,1) = R.at<double>(0,1);
H.at<double>(0,2) = R.at<double>(0,2);

H.at<double>(1,0) = R.at<double>(1,0);
H.at<double>(1,1) = R.at<double>(1,1);
H.at<double>(1,2) = R.at<double>(1,2);

H.at<double>(2,0) = 0.0;
H.at<double>(2,1) = 0.0;
H.at<double>(2,2) = 1.0;

// compute perspectiveTransform on p1
std::vector<cv::Point2f> result;
cv::perspectiveTransform(p1,result,H);

//warp image with transform
cv::Mat warped;
cv::warpPerspective(src,warped,H,src.size());

I didn't try it, but referring to that answer it should work fine.

PSchn
    A little paradoxical that they chose to call it rigid, when the 6 and 4 DOF transforms are precisely not rigid. –  Sep 26 '16 at 17:16
  • Thank you so much for answering. Why does it have 5 degrees of freedom instead of 4? According to [their formula](http://docs.opencv.org/2.4/_images/math/0a22facbc11cdd0f9b8d4658e0c145da2cb8730b.png) there are only 4 parameters. – dontloo Sep 27 '16 at 06:10
  • Yeah, I'm confused too. I know there is theoretically a 5th parameter e for orientation. But they're calling it rigid, so e should equal 1 (orientation preserving). – PSchn Sep 27 '16 at 08:34

Somehow there are some problems working with cv::estimateRigidTransform in the OpenCV version I'm using, so I wrote a function that works with only two points (that's sufficient for me, and I believe it'll be faster).

cv::Mat getSimilarityTransform(const cv::Point2f src[], const cv::Point2f dst[])
{
    double src_d_y = src[0].y - src[1].y;
    double src_d_x = src[0].x - src[1].x;
    double src_dis = sqrt(pow(src_d_y, 2) + pow(src_d_x, 2));

    double dst_d_y = dst[0].y - dst[1].y;
    double dst_d_x = dst[0].x - dst[1].x;
    double dst_dis = sqrt(pow(dst_d_y, 2) + pow(dst_d_x, 2));

    double scale = dst_dis / src_dis;
    // angle between two line segments
    // ref: http://stackoverflow.com/questions/3365171/calculating-the-angle-between-two-lines-without-having-to-calculate-the-slope
    double angle = atan2(src_d_y, src_d_x) - atan2(dst_d_y, dst_d_x);

    double alpha = cos(angle)*scale;
    double beta = sin(angle)*scale;

    cv::Mat M(2, 3, CV_64F);
    double* m = M.ptr<double>();

    m[0] = alpha;
    m[1] = beta;
    // tx = x' -alpha*x  -beta*y
    // average of two points
    m[2] = (dst[0].x - alpha*src[0].x - beta*src[0].y + dst[1].x - alpha*src[1].x - beta*src[1].y)/2;
    m[3] = -beta;
    m[4] = alpha;
    // ty = y' +beta*x  -alpha*y
    // average of two points
    m[5] = (dst[0].y + beta*src[0].x - alpha*src[0].y + dst[1].y + beta*src[1].x - alpha*src[1].y)/2;

    return M;
}

Some results (pictures from the LFW data set; example images omitted).

dontloo