I'm trying to find common overlaps between two images. For this I use an ORB feature detector with the BEBLID feature descriptor, then find the homography between the two images and align them. The code is as follows:
for pair in image_pairs:
    # Detect ORB keypoints on grayscale versions of both images
    img1 = cv2.cvtColor(pair[0], cv2.COLOR_BGR2GRAY)
    img2 = cv2.cvtColor(pair[1], cv2.COLOR_BGR2GRAY)
    detector = cv2.ORB_create(1000)
    kpts1 = detector.detect(img1, None)
    kpts2 = detector.detect(img2, None)

    # Describe the keypoints with BEBLID (from opencv-contrib)
    descriptor = cv2.xfeatures2d.BEBLID_create(0.75)
    kpts1, desc1 = descriptor.compute(img1, kpts1)
    kpts2, desc2 = descriptor.compute(img2, kpts2)

    # Brute-force Hamming matching; keep the best 20% of matches
    matcher = cv2.DescriptorMatcher_create(cv2.DESCRIPTOR_MATCHER_BRUTEFORCE_HAMMING)
    matches = matcher.match(desc1, desc2, None)
    matches = sorted(matches, key=lambda x: x.distance)
    keep = int(len(matches) * 0.2)
    matches = matches[:keep]

    matchedVis = cv2.drawMatches(pair[0], kpts1, pair[1], kpts2, matches, None)
    cv2.imwrite("feature_match.png", matchedVis)

    # Collect the coordinates of the matched keypoints
    ptsA = np.zeros((len(matches), 2), dtype="float")
    ptsB = np.zeros((len(matches), 2), dtype="float")
    for (i, m) in enumerate(matches):
        ptsA[i] = kpts1[m.queryIdx].pt
        ptsB[i] = kpts2[m.trainIdx].pt

    # Estimate the homography with RANSAC and warp the first image
    # into the second image's frame
    (H, mask) = cv2.findHomography(ptsA, ptsB, method=cv2.RANSAC)
    (h, w) = img2.shape[:2]
    aligned = cv2.warpPerspective(pair[0], H, (w, h))
    cv2.imwrite("warp.png", aligned)
A successful alignment of two images looks like:
And an unsuccessful alignment of two images looks like:
Some of the image pairs in the image_pairs list have no common overlap, and for those the alignment fails. Is there a way to detect such failures (or, equivalently, confirm a successful alignment) programmatically, without explicitly inspecting the warp.png images?