I have an image full_map_image
which I have rotated by a certain angle
using the following code:
Mat rotated_full_map_image;
Point2f src_center(full_map_image.cols/2.0F, full_map_image.rows/2.0F);
Mat rot_mat = getRotationMatrix2D(src_center, angle, 1.0);
warpAffine(full_map_image, rotated_full_map_image, rot_mat, full_map_image.size());
I have also found the coordinates of a point in the rotated image using this:
Point rotated_centre;
Point2f p(some_x, some_y);
// rotated point = rot_mat * [x, y, 1]^T (rounded to the nearest pixel)
rotated_centre.x = cvRound(rot_mat.at<double>(0,0)*p.x + rot_mat.at<double>(0,1)*p.y + rot_mat.at<double>(0,2));
rotated_centre.y = cvRound(rot_mat.at<double>(1,0)*p.x + rot_mat.at<double>(1,1)*p.y + rot_mat.at<double>(1,2));
However, my original image full_map_image
is much taller than it is wide. So when I rotate it by, say, 90 degrees and keep the output size at full_map_image.size()
, most of the rotated image is cut off, because warpAffine clips everything that falls outside that size. This is similar to this answer.
My question: how can I properly perform this rotation by any given angle and still use the same method to find the coordinates of a point in the rotated image? I am using OpenCV with C++.