I'm currently having trouble understanding what's necessary to transform a cv::RotatedRect
after rotating an image without cropping, using the following code by Lars Schillingmann from this question.
Here's the code he provided as an answer:
#include "opencv2/opencv.hpp"

int main()
{
    cv::Mat src = cv::imread("im.png", CV_LOAD_IMAGE_UNCHANGED);
    double angle = -45;

    // get rotation matrix for rotating the image around its center in pixel coordinates
    cv::Point2f center((src.cols-1)/2.0, (src.rows-1)/2.0);
    cv::Mat rot = cv::getRotationMatrix2D(center, angle, 1.0);

    // determine bounding rectangle, center not relevant
    cv::Rect2f bbox = cv::RotatedRect(cv::Point2f(), src.size(), angle).boundingRect2f();

    // adjust transformation matrix
    rot.at<double>(0,2) += bbox.width/2.0 - src.cols/2.0;
    rot.at<double>(1,2) += bbox.height/2.0 - src.rows/2.0;

    cv::Mat dst;
    cv::warpAffine(src, dst, rot, bbox.size());
    cv::imwrite("rotated_im.png", dst);
    return 0;
}
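If I understand the code correctly, after the last column of rot is adjusted, the matrix maps a point given in src pixel coordinates to the corresponding point in dst. So my assumption is that a single point would be mapped like this (just a sketch, the point is made up):

// My understanding (assumption): after the offset adjustment, rot maps src pixel
// coordinates to dst pixel coordinates, so a single point would map like this:
cv::Point2f p(100.f, 50.f);  // made-up example point in src
double x_new = rot.at<double>(0,0)*p.x + rot.at<double>(0,1)*p.y + rot.at<double>(0,2);
double y_new = rot.at<double>(1,0)*p.x + rot.at<double>(1,1)*p.y + rot.at<double>(1,2);
cv::Point2f p_dst(static_cast<float>(x_new), static_cast<float>(y_new));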
In my case, I have a cv::RotatedRect
that matches a certain position in the src
image. This cv::RotatedRect
should match the same position after the transformation/rotation has been applied to the src
mat. Currently, I'm struggling to do this the right way.
From what I know, to rotate a cv::RotatedRect
, it's only necessary to directly modify the members of the structure, e.g. angle
. I'm quite sure that I only have to modify the center, but the new position is always a bit off from the expected location. I initially expected that I only have to add the difference between the bbox
and src
dimensions to get what I'm looking for, but that turns out not to be the case (including the rotation, of course).
connected_components[i].center.x += ...
connected_components[i].center.y += ...
cv::RotatedRect newRect(connected_components[i].center, connected_components[i].size, connected_components[i].angle - median);
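Essentially I think I need to apply the same matrix to the rect's center, something like this sketch (assuming rot is the adjusted 2x3 matrix from the code above and median is the angle I rotate by; the angle handling is what I'm unsure about):

// Sketch of what I think I need (assumption: rot is the adjusted matrix from above)
std::vector<cv::Point2f> centers{ connected_components[i].center };
std::vector<cv::Point2f> mapped;
cv::transform(centers, mapped, rot);  // map the center from src to dst coordinates
cv::RotatedRect newRect(mapped[0],
                        connected_components[i].size,
                        connected_components[i].angle - median);  // still unsure about the angle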