
I am trying to create a stitching algorithm, and have been mostly successful; a few tweaks are still needed. The photos below are examples of my stitching program so far. I am able to provide it with an unordered list of images, and as long as each image lies along the flight path or side by side with another, it will work regardless of their orientation to one another: img1, then img2.

The issue is that if the images are reversed, some of the image doesn't make it into the final product. Here is the code for the actual stitching; assume that keypoint detection, matching, and homography estimation are done correctly.

By altering this code, is there a way to centre the first image on the destination blank image and still stitch to it? Also, I got this code from Stack Overflow (Opencv Image Stitching or Panorama) and am not fully sure how it works, so I would love it if someone could explain it.

Thanks for any help in advance!

Mat stitchMatches(Mat image1, Mat image2, Mat homography){
    Mat result;
    vector<Point2f> fourPoint;
    // Get the four corners of the first image (master).
    fourPoint.push_back(Point2f(0, 0));
    fourPoint.push_back(Point2f(image1.size().width, 0));
    fourPoint.push_back(Point2f(0, image1.size().height));
    fourPoint.push_back(Point2f(image1.size().width, image1.size().height));

    // Warp the corners to see where image1 lands in image2's frame.
    vector<Point2f> destination;
    perspectiveTransform(fourPoint, destination, homography);

    // Bounding box of the warped corners (the original code took the
    // min/max of the untransformed corners, which left Htr as identity).
    double min_x, min_y, tam_x, tam_y;
    min_x = min(min(destination.at(0).x, destination.at(1).x),
                min(destination.at(2).x, destination.at(3).x));
    min_y = min(min(destination.at(0).y, destination.at(1).y),
                min(destination.at(2).y, destination.at(3).y));
    tam_x = max(max(destination.at(0).x, destination.at(1).x),
                max(destination.at(2).x, destination.at(3).x));
    tam_y = max(max(destination.at(0).y, destination.at(1).y),
                max(destination.at(2).y, destination.at(3).y));

    // Translate everything into positive coordinates if the warped
    // image spills past the top or left edge.
    Mat Htr = Mat::eye(3, 3, CV_64F);
    if (min_x < 0){
        tam_x = image2.size().width - min_x;
        Htr.at<double>(0,2) = -min_x;
    }
    if (min_y < 0){
        tam_y = image2.size().height - min_y;
        Htr.at<double>(1,2) = -min_y;
    }

    result = Mat(Size(tam_x*2, tam_y*2), image2.type());
    warpPerspective(image2, result, Htr, result.size(), INTER_LINEAR, BORDER_CONSTANT, Scalar());
    warpPerspective(image1, result, Htr*homography, result.size(), INTER_LINEAR, BORDER_TRANSPARENT, Scalar());
    return result;
}
C.Radford
  • Off topic: Looks like you've been having fun. Looked into this a while ago to detect oil pipeline leaks with drones before moving on to a different job. No idea if anyone ever built it. – user4581301 Jun 13 '17 at 21:44
  • What do you mean by 'reversed'? – KjMag Jun 13 '17 at 22:59
  • Possible duplicate of [Stitching images using template matching and warpAffine](https://stackoverflow.com/questions/44457064/stitching-images-using-template-matching-and-warpaffine) – alkasm Jun 13 '17 at 23:37
  • I flagged this as a duplicate. Not a bad question at all, but I [just answered a very similar question the other day](https://stackoverflow.com/questions/44457064/stitching-images-using-template-matching-and-warpaffine/44459869?noredirect=1#comment75940151_44459869). Hope that helps you out! – alkasm Jun 13 '17 at 23:37
  • KjMag - by reversed I mean the order that the images are piped into the program. I want it to be able to stitch images together regardless of the order (if left most image is first or rightmost is first). – C.Radford Jun 19 '17 at 15:43
  • @AlexanderReynolds Thanks for the awesome answer again; I wanted to run my plan of attack by you. Step 1: Get the four corners of image1 and image2. Step 2: Apply a perspective transform to both corner Mats using the appropriate homography. Step 3: Determine how far the images will fall outside the bounds of the blank image. Step 4: Pad the blank image based on the size the warped images will take up. Step 5: Determine how much to translate the warped images and create a matrix to represent this. Step 6: warpPerspective based on the homography and translation matrix. – C.Radford Jul 19 '17 at 15:09
  • Yep, that sounds perfect! I know you're using C++, but I have created a Python module to do exactly this. Check out the GitHub [here](https://github.com/alkasm/padded-transformations). Python is fairly intuitive, so you should be able to read along for any steps that you might get stuck at. If you create a fully functioning version in C++, it would be great if you created a pull request to get it into the repository! – alkasm Jul 19 '17 at 20:05
  • @AlexanderReynolds Thanks! I made a partial solution today; it translates, just not to exactly the right spot. A cool side effect is that I no longer need to crop the image, as the blank image will always be adjusted (padded) to the right size for the warped images, saving me some time, which is awesome. I took a look at your modules for inspiration, and yes, if I get it working I will create a pull request. Even just some simple functions I've made, like grabbing the four corners of an image or finding the min and max x and y coordinates between two images, will hopefully help others. – C.Radford Jul 20 '17 at 02:30
  • If you want to troubleshoot it a bit, feel free to [join me in chat](https://chat.stackoverflow.com/rooms/info/149661/padding-homograhpies). – alkasm Jul 20 '17 at 03:10

1 Answer


It's normally easy to center an image: you simply create a bigger matrix padded with zeros (or whatever color you want), define an ROI in the center with the same size as your image, and place the image there. However, you cannot in general do this with your two images. The problem is that if an image is shifted or rotated so that parts of it lie outside your destination image bounds, then the warped image returned from warpPerspective is cut off at those bounds. What you need to do is create the padded image, insert the image that is not being warped wherever you like, and modify the transformation (the homography, in this case) by adding in the translation to those pixels.

For example, if your centered image has its top-left point at (400, 500) in the padded image, then you need to add a translation of (400, 500) to your homography so the pixels get mapped to the correct space, and as long as your padded image is large enough, none of it will be cut off.

You will need to create a translational homography and compose it with your original homography to add the translation in. For example, suppose your anchor point for the non-warped image inside the padded image is at (x, y). Translation in a homography is carried by the last column: if your homography is a 3x3 matrix H, then (using normal mathematical indexing) H(1,3) is your translation in x and H(2,3) is the translation in y. So we need to create a new identity homography H_t and add those translations in:

      1 0 x
H_t = 0 1 y
      0 0 1

Then you can compose this with your original homography H (using matrix multiplication): H_n = H_t * H. Using the new homography H_n we can warp the image into this padded space with that added translation to move it to the correct spot using warpPerspective as usual.

You can also automate this to pad the image precisely as much as it needs, so that you don't have excess padding and the padding will stretch only as needed. See my answer here for a detailed explanation of how to calculate that and warp your images into the padded space.

alkasm
  • Thanks for these awesome suggestions and resources to check out. The math side of this is a little much for me but getting there. Appreciate the help. – C.Radford Jun 19 '17 at 15:46
  • @C.Radford No problem, and it's not *too* much math, just a bit of linear algebra. Even then, all you need to know is that homographies take your current pixels and map them to new locations via matrix multiplication. `warpPerspective` does this, and then the pixel value (the color) gets placed at the new location. All I'm essentially suggesting is: place the destination image in the center of a large matrix of zeros and tell the homography how much padding you used on the top and left side, so it knows to use that translation in the homography (on top of the other warping it does). – alkasm Jun 19 '17 at 16:21