
In the program below I am aligning two images using a homography and reducing the opacity of the im_dst image in im_out (say opacity=0.5), so that I can see both im_src and im_dst in im_out. But all I am getting is a blackened im_dst image in im_out!

import cv2
import numpy as np
im_src = cv2.imread('src.jpg')
pts_src = np.array([[141, 131], [480, 159], [493, 630],[64, 601]])
im_dst = cv2.imread('dst.jpg')
pts_dst = np.array([[318, 256],[534, 372],[316, 670],[73, 473]])
h, status = cv2.findHomography(pts_src, pts_dst)
img1 = np.array(im_dst , dtype=np.float)
img2 = np.array(im_src , dtype=np.float)
img1 /= 255.0
# pre-multiplication
a_channel = np.ones(img1.shape, dtype=np.float)/2.0
im_dst = img1*a_channel
im_src = img2*(1-a_channel)
im_out = cv2.warpPerspective(im_src, h, (im_dst.shape[1],im_dst.shape[0]))
cv2.imshow("Warped Image", im_out)
cv2.waitKey(0)

I am new to openCV, so I might be missing something simple. Thanks for help!

Ank
  • looks like something [related](https://stackoverflow.com/a/3375291/5997596) – Azat Ibrakov Jun 05 '17 at 02:25
  • @AzatIbrakov I don't want simple overlaying of images like in Image.alpha_composite() or cv2.addWeighted(). I want to match their homography too! – Ank Jun 05 '17 at 05:13
  • That's exactly what you want! You just want to use `cv2.addWeighted()` or `.alpha_composite()` on the warped image with the destination image. – alkasm Jun 05 '17 at 23:47
  • @AlexanderReynolds I had no idea those functions could be used that way! Thanks for clearing up!! – Ank Jun 06 '17 at 00:53
  • @Ank to be sure, the reason that you can is because when you use `warpPerspective()`, you're passing in `im_dst.shape` so `im_dst` and `im_out` have the same shape/size; this is the hint that they're both in the same coordinates so they can be plotted together! – alkasm Jun 06 '17 at 01:45

1 Answer


Hey I've seen those points before!

What your code is doing is reducing the values of the two images, im_dst and im_src, but then you simply warp the faded im_src to a new position and display only that, without ever combining it with im_dst. Instead, you should add the faded, warped image to the destination image and output the sum. The following would be a working modification of the end of your code:

alpha = 0.5
im_dst = img1 * alpha
im_src = img2 * (1-alpha)
im_out = cv2.warpPerspective(im_src, h, (im_dst.shape[1],im_dst.shape[0]))
im_blended = im_dst + im_out
cv2.imshow("Blended Warped Image", im_blended)
cv2.waitKey(0)

However, you divided only img1 (and not img2) by 255, so you would want to scale both to [0, 1] first.
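To see why that matters, here is a minimal sketch of the blend with both images normalized (synthetic arrays stand in for the loaded images, and no warp is applied, just the blending arithmetic):

```python
import numpy as np

# Stand-ins for im_dst and im_src after cv2.imread (8-bit BGR images)
img1 = np.full((4, 4, 3), 200, dtype=np.uint8).astype(float)
img2 = np.full((4, 4, 3), 100, dtype=np.uint8).astype(float)

# Scale BOTH images to [0, 1]; floats outside this range display wrong in imshow
img1 /= 255.0
img2 /= 255.0

alpha = 0.5
blended = img1 * alpha + img2 * (1 - alpha)
print(blended.max())  # stays within [0, 1], so imshow renders it correctly
```

If only one image is scaled, one term of the sum is ~255 times larger than the other, which is why the faded image looks blacked out next to it.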


However, there is no reason to do this manually as you have to worry about converting the image types and scaling and all that. Instead, a much easier way is to use the built-in OpenCV function addWeighted() to add two images together with alpha-blending. So your entire code would instead be this short:

import cv2
import numpy as np

im_src = cv2.imread('src.jpg')
pts_src = np.array([[141, 131], [480, 159], [493, 630],[64, 601]])
im_dst = cv2.imread('dst.jpg')
pts_dst = np.array([[318, 256],[534, 372],[316, 670],[73, 473]])

h, status = cv2.findHomography(pts_src, pts_dst)
im_out = cv2.warpPerspective(im_src, h, (im_dst.shape[1],im_dst.shape[0]))

alpha = 0.5
beta = (1.0 - alpha)
dst_warp_blended = cv2.addWeighted(im_dst, alpha, im_out, beta, 0.0)

cv2.imshow('Blended destination and warped image', dst_warp_blended)
cv2.waitKey(0)

The function addWeighted() multiplies the first image im_dst by alpha and the second image im_out by beta. The last argument is a scalar shift (gamma) you can add to the result should you need it. Finally, the result is saturated, so values above whatever is allowable for your datatype are truncated at the maximum. This way, your result is the same type as your inputs; you don't have to convert to float.
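The saturating arithmetic is the part that is easy to get wrong when blending by hand. Here is a NumPy-only sketch of what addWeighted computes (the helper name `add_weighted_sketch` is mine, not an OpenCV API):

```python
import numpy as np

def add_weighted_sketch(src1, alpha, src2, beta, gamma):
    """NumPy sketch of cv2.addWeighted for 8-bit inputs:
    saturate(src1*alpha + src2*beta + gamma), same dtype as the inputs."""
    out = src1.astype(np.float64) * alpha + src2.astype(np.float64) * beta + gamma
    return np.clip(np.rint(out), 0, 255).astype(src1.dtype)

a = np.array([[200, 50]], dtype=np.uint8)
b = np.array([[180, 30]], dtype=np.uint8)
print(add_weighted_sketch(a, 0.7, b, 0.7, 0.0))  # [[255  56]]
```

Note the first pixel: 200*0.7 + 180*0.7 = 266, which saturates to 255 instead of wrapping around as plain uint8 addition would.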


Last point about your code. A lot of tutorials, the one linked above included, use findHomography() to get a homography from four matching points. It is more appropriate to use getPerspectiveTransform() in this case. The function findHomography() finds an optimal homography based on many matching points, using an outlier rejection scheme and random sampling to speed up going through all the possible sets of four matching points. It works fine for sets of four points of course, but it makes more sense to use getPerspectiveTransform() when you have four matching points, and findHomography() when you have more than four. Although, annoyingly, the points you pass into getPerspectiveTransform() have to be of type np.float32 for whatever reason. So this would be my final suggestion for your code:

import cv2
import numpy as np

# Read source image.
im_src = cv2.imread('src.jpg')
# Four corners of the book in source image
pts_src = np.array([[141, 131], [480, 159], [493, 630],[64, 601]], dtype=np.float32)

# Read destination image.
im_dst = cv2.imread('dst.jpg')
# Four corners of the book in destination image.
pts_dst = np.array([[318, 256],[534, 372],[316, 670],[73, 473]], dtype=np.float32)

# Calculate Homography
h = cv2.getPerspectiveTransform(pts_src, pts_dst)

# Warp source image to destination based on homography
warp_src = cv2.warpPerspective(im_src, h, (im_dst.shape[1],im_dst.shape[0]))

# Blend the warped image and the destination image
alpha = 0.5
beta = (1.0 - alpha)
dst_warp_blended = cv2.addWeighted(im_dst, alpha, warp_src, beta, 0.0)

# Show the output
cv2.imshow('Blended destination and warped image', dst_warp_blended)
cv2.waitKey(0)

This (and all the other solutions above) will produce the following image: Warped im_src blended with im_dst

alkasm
  • The first section (your modification to my existing code) gave me memory error (probably because I am doing a lot of analysis to find the points for findHomography() function), but its giving memory error at that line itself, so I am not sure. Anyway, the last code works just fine so I am good. Thanks for help!! – Ank Jun 06 '17 at 06:40
  • Also, I think you missed this line im_out = cv2.warpPerspective(im_src, h,(im_dst.shape[1],im_dst.shape[0])) in second program – Ank Jun 06 '17 at 06:43
  • @Ank that is possible. I just tested it again and it works fine on my end. And thanks for that catch, totally missed that. I will edit the answer. Glad you got it to work! How are you finding rectangular points of the book for the homography? Just curious. – alkasm Jun 06 '17 at 06:52
  • I have yellow tape pasted on the four corners. Using OpenCV I find yellow color and use the coordinates of these marks (tape) for homography – Ank Jun 06 '17 at 07:36