
I have seen a lot of tutorials on how to do simple image stitching using two photos, and that is no problem.
But what should I do when I want to make a panorama from 4-6 images or more?

I have code that takes in a list of image files (the images are in order from the first image in the sequence to the last). For each image I compute the SIFT feature descriptors. But then I am stuck: for two images I would set up a matcher using a FLANN kd-tree, find matches between the images, and calculate the homography, similar to this tutorial: http://docs.opencv.org/trunk/doc/py_tutorials/py_feature2d/py_feature_homography/py_feature_homography.html#py-feature-homography

But instead of showing the lines between feature points at the end, I have used this function https://stackoverflow.com/a/20355545/622194 to make a panorama from 2 images. But I am not sure what to do when I want to add the third and fourth image to the panorama.

EDIT:

Following the answers, I have changed my image stitching script to calculate a homography matrix between images that are next to each other in the sequence. So if I have I1, I2, I3 and I4, I now have H_12, H_23 and H_34. I start by stitching I1 and I2 using H_12. Then, to stitch I3 to the current panorama, I compute the cumulative homography H_13 = H_12 * H_23 and warp image 3 with it. But here I get a very apparent gap in my panorama image, and when the next image is stitched the gap is even bigger and the image is very stretched.

Can anyone tell me if I am using the right approach for this, or spot what I am doing wrong?

3 Answers


Step by step, assuming you want to stitch four images I0, I1, I2, I3, your goal is to compute homographies H_0, H_1, H_2, H_3:

  1. Compute all pairwise homographies H_01, H_02, H_03, H_12, H_13, H_23, where homography H_01 warps image I0 into I1, etc.
  2. Select one anchor image, e.g. I1, whose position will remain fixed, i.e. H_1 = Identity.
  3. Find the image that best aligns with I1 based on the maximum number of consistent matches, e.g. I3.
  4. Update H_3 = H_1 * inv(H_13) = inv(H_13) = H_31.
  5. Find the image that best matches I1 or I3, e.g. I2 matching I3.
  6. Update H_2 = H_3 * H_23.
  7. Do the same for image I0.
  8. Do bundle adjustment to globally optimize the alignment.

See section 4 of the seminal paper Automatic Panoramic Image Stitching using Invariant Features for an in-depth explanation.
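The chaining in steps 2-7 can be sketched with plain numpy, assuming the pairwise homographies have already been estimated (e.g. with cv2.findHomography). The function name, the dictionary keys, and the attach order below are illustrative, not part of the paper:

```python
import numpy as np

def absolute_homographies(pairwise, anchor, order):
    """Chain pairwise homographies into absolute ones relative to the anchor.

    pairwise[(i, j)] is the homography that warps image i into image j.
    `order` lists (new_image, already_placed_image) pairs in the order
    the images are attached to the growing panorama.
    """
    H_abs = {anchor: np.eye(3)}  # step 2: the anchor stays fixed
    for new, placed in order:
        if (new, placed) in pairwise:
            # warp `new` into `placed`, then `placed` into the anchor
            H_abs[new] = H_abs[placed] @ pairwise[(new, placed)]
        else:
            # only the reverse direction was computed: invert it (step 4)
            H_abs[new] = H_abs[placed] @ np.linalg.inv(pairwise[(placed, new)])
    return H_abs
```

A real pipeline would pick `order` greedily by match count and follow up with bundle adjustment (step 8), which this sketch omits.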

memecs
  • Will the same approach work if I try to do a 360° panorama, where I know the order of the images img0, img1, ..., imgN? Is it then enough to store the homographies between img0-img1, img1-img2, ..., imgN-1-imgN, so that for img0-img1 I use H_01, and for (img0-img1)-img2 I use H_02 = H_01*H_12? Would that work? I sense that your approach assumes that I do not know which images should match together. If I know the image sequence, I believe it is useless to store the homography between img0 and img5 as they match poorly. –  Jul 04 '14 at 12:47
  • Yes, it will work for 360° panoramas. Bundle adjustment will be essential to correct the global alignment and wrap the panorama correctly. And sure, if you are creating an algorithm for a particular panorama and you know some of the images match poorly, you can avoid computing homographies between them. – memecs Jul 07 '14 at 12:32
  • @memecs The link to the paper is dead. – rex123 Sep 25 '17 at 20:47
  • Can you detail a bit "Do bundle adjustment to globally optimize alignment"? – Philippe Remy Sep 14 '20 at 06:32

Hacky approach

The easiest way (though not super efficient) given the functions you've written, is to just grow the panorama image by stitching it with each successive image. Something like this pseudocode:

panorama = images[0]
for i in 1:len(images)-1
    panorama = stitch(panorama,images[i])

This method basically attempts to match the next image to any part of the current panorama. It should work decently well, assuming each new image is somewhere on the border of the current panorama, and there isn't too much perspective distortion.
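In code, this incremental approach is just a left fold over the image list. Here `stitch` stands for whatever two-image function you already have (e.g. the one from the linked answer); it is passed in as a parameter so the sketch stays library-agnostic:

```python
def build_panorama(images, stitch):
    """Grow the panorama by merging each successive image into it."""
    panorama = images[0]
    for img in images[1:]:
        # match the next image against the whole current panorama
        panorama = stitch(panorama, img)
    return panorama
```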

Mathematical approach

The other option, if you know the order that you want to stitch, is to find the Homography from one image to the next, and then multiply them. The result is the Homography from that image to image 0.

For example: the H that transforms image 3 to line up with image 0 is H_03 = H_01 * H_12 * H_23, where H_01 is the H that transforms image 1 to line up with image 0. (Depending on the way their code defines H, you might need to reverse the above multiplication order.) So you would multiply to obtain H_0i and then use it to transform image i to line up with image 0.

For background on why you multiply the transformations, see: Transformations and Matrix Multiplication, specifically the "Composition of transformations" part.
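A minimal numpy sketch of this chaining (the helper name is mine; pairwise_Hs[i] is assumed to be the homography that lines image i+1 up with image i):

```python
import numpy as np

def chain_to_reference(pairwise_Hs):
    """Chain pairwise homographies into homographies to image 0:
    returns [H_00, H_01, H_02, ...] where H_00 = I and
    H_0i = H_01 @ H_12 @ ... @ H_(i-1)i."""
    H = np.eye(3)
    chained = [H.copy()]
    for H_step in pairwise_Hs:
        H = H @ H_step  # H_0i = H_0(i-1) @ H_(i-1)i
        chained.append(H.copy())
    return chained
```

Each H_0i can then be passed to cv2.warpPerspective to place image i on image 0's plane.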

Luke
  • The problem with the hacky approach is that it will be difficult to reliably estimate matching features once images are warped. – memecs Jul 03 '14 at 23:19
  • @memecs - absolutely right. (That's what I meant by perspective distortion.) – Luke Jul 03 '14 at 23:23
  • Ok, not sure if I am doing this correctly, but here is my code: http://pastebin.com/HLAnF62p It does not work: the first two images are correctly stitched, but when I add the third one I just get something weird; the window becomes very large and I only see part of the first two images stitched together and nothing of the third image. This is close, but I just need to find out how to get the correct transformation for the third and later images. If I multiply by the identity matrix, the images are just drawn on top of each other with a small translation. –  Jul 04 '14 at 18:46

I had a similar problem with gaps between images. The first thing you should do is initialize your accumulated homography matrix to the identity at the first frame. Then, with every new frame, you should multiply it by the homography matrix between the current and next frame. Be aware that numpy matrices and numpy arrays have different multiplication routines: for np.matrix, `*` is matrix multiplication, while for np.ndarray it is element-wise.
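A quick demonstration of that pitfall with plain numpy arrays, using two translation homographies; `@` (or np.dot) gives the composed transform, while `*` silently produces nonsense:

```python
import numpy as np

# two homographies: translate by 5 and by 3 along x
A = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
B = np.array([[1.0, 0.0, 3.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

elementwise = A * B   # element-wise product: NOT a composed homography
composed = A @ B      # matrix product: translates by 8 along x
```

Using `@` works regardless of whether the operands are arrays or matrices, so it is the safer choice when accumulating homographies.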

Here is my code:

def addFramePair(self, images, ratio=0.75, reprojThresh=4.0, showMatches=False):
    (imageA, imageB) = images
    # detect keypoints and extract feature descriptors for both frames
    (kpsA, featuresA) = self.detectAndDescribe(imageA)
    (kpsB, featuresB) = self.detectAndDescribe(imageB)

    # homography between the current frame and the next one
    H = self.matchKeypoints(kpsA, kpsB, featuresA, featuresB, ratio, reprojThresh)
    # accumulate with a true matrix product (@, not * on plain arrays)
    self.accHomography = self.accHomography @ H
    # warp the current frame into the panorama plane
    result = cv2.warpPerspective(imageA, np.linalg.inv(self.accHomography), (1600, 900))
    return result

imageA is the current frame, imageB is the next one.

Hope this helps.