
I have two images, much like the ones below, and a set of matches ((x_1, y_1), (x_2, y_2)), where (x_1, y_1) is a point on the first image and (x_2, y_2) is a point on the second image. I want to overlay the images on top of one another based on the point matches I have, so that each pair of matching points lies directly on top of each other. The images may not be oriented at the same angle, so calculating pixel offsets and using PIL's paste() function would not work, at least not without more preprocessing. In my case, one image is in color, and I can adjust its opacity to plot them on top of each other.

from PIL import Image

img1 = Image.open('Rutland.jpg')
img2 = Image.open('Rutland2.jpg')

# matching pixels across the two images
point_matches = [((1988, 1029), (2003, 1045)), ((4091, 3901), (4085, 3896))]

[example images]

Vy Do
  • You can try [perspective transform](https://stackoverflow.com/q/14177744/11089932), but that'll require four matching points. And, at best, each of those matching points is located next to one of the corners. – HansHirse Jul 08 '21 at 06:22
  • @ShlomiF - Yes, sorry I was away for a few days! Thank you much. – Collin Giguere Jul 10 '21 at 16:34

1 Answer


Your question includes two separate stages:

  1. Alignment
  2. Overlaying the plotted images.

Let's get #2 out of the way first: with matplotlib's pyplot.imshow you can plot both images on the same axes and use the alpha argument to control transparency/opacity, or you can blend them with cv2's addWeighted -

import cv2

im1_ = im1.copy()
im2_ = im2.copy()
# Blend 50/50; addWeighted returns the blended image
output = cv2.addWeighted(im1_, 0.5, im2_, 0.5, 0)
cv2.imshow("Overlay", output)
cv2.waitKey(0)

Regarding alignment (which is by far the more complex issue), you need to "process" one of the images so that it best matches the other. This might be one of many possible mappings, such as a translation, a rotation, a homographic mapping, and more.

A first point to note is that your matches do not agree with each other if only shifts are involved. In other words, if we consider only a global shift (Δx, Δy), then the first match estimates a shift of (15, 16) (from (1988, 1029) to (2003, 1045)), while the second estimates a shift of (-6, -5). Depending on the fidelity required of the image-fitting, it might suffice to just average these two estimates and translate the image accordingly.
If that's not enough, then you're implicitly assuming there's some more complex mapping or warping going on, and those mappings generally require more than two points for estimating their parameters. (I think the most you can do with two points is a "similarity" transform, which combines translation, rotation, and scale.)

On a more general note, you may want to consider matching the images using OpenCV functions, for finding keypoints (via SIFT, for instance) and/or for finding the actual mapping. See this example tutorial, although there are other, more direct methods that don't require going explicitly through keypoints. This is a good starting point on using ECC (Enhanced Correlation Coefficient).

More specifically, if you have enough points, you can use cv2.findHomography which accepts lists of key-points as inputs.

ShlomiF
  • You are missing out on the point that he has 2 matching points which is sufficient for finding out the affine transformation between the two images. – Gabend Jul 08 '21 at 08:07
  • @Gabend - if I'm not mistaken, an affine transformation has 6 params and requires 3 point-pairs. It's implicitly included where I mention that other (more general) transformations can be used but require more points, and that the most one can do with two points is the "similarity" transformation. – ShlomiF Jul 08 '21 at 09:26
  • Thanks for your answer! The set of matching points will be more than two. It will vary, but it's usually at least 6 and I am assuming that there is a more complex mapping happening. Sorry I didn't make that clear beforehand! Is there a library or function that can help me with this type of mapping? – Collin Giguere Jul 08 '21 at 14:19
  • My answer actually already addresses this. But I added a line that makes things slightly more explicit. – ShlomiF Jul 08 '21 at 14:36