Thanks in large part to some great answers on Stack Overflow (here, here, and here), I've been having pretty good success aligning images. There is one issue, though, as you can see below: as I stitch more and more images together, they get smaller and smaller.
My theory is that the camera wasn't exactly perpendicular to the ground, so as I added more images, the perspective from that tilt caused the farther images to shrink. I could very well be completely wrong about this, though.
However, even when I transform the first image so that it's "as if" it were taken perpendicular to the ground (I think), the distortion still occurs.
Does the brilliant stackoverflow community have any ideas on how I can remedy the situation?
This is the process I use to stitch the images:
- Using knowledge of the corner lat/long points of the images, warp the first image so that it is perpendicular to the ground. The homography I use to do this is the "base" homography.
- Find common features between each image and the last one using `goodFeaturesToTrack()` and `calcOpticalFlowPyrLK()`.
- Use `findHomography()` to find the homography between the two images. Then, compose that homography with all the previous homographies to get the "net" homography.
- Apply the transformation and overlay the image with the net result of what I've done so far.
There is one major constraint: the mosaic must be constructed one image at a time, as the camera moves. I am trying to create a real-time map as a drone flies, fitting each image to the last, one by one.