
Hi, I tried to perform feature matching on the two images attached below. The code was not able to match the two images together. The images differ slightly in scale and translation. I attached the code below to give context to this question. Also, is there any way to optimise the code to increase the number of matches between the images? Thanks

Edit: added in img3.

[image: the two images] [image: img3]

import cv2

img = cv2.imread("file_path", cv2.IMREAD_GRAYSCALE)
ret, thresh1 = cv2.threshold(img, 220, 255, cv2.THRESH_BINARY)

camelot = cv2.imread("file_path", cv2.IMREAD_GRAYSCALE)
ret, camelot = cv2.threshold(camelot, 220, 255, cv2.THRESH_BINARY)
cv2.imshow('camelot', camelot)
cv2.imshow('thresh1', thresh1)

cv2.waitKey(0)
cv2.destroyAllWindows()

# Initiate ORB detector
orb = cv2.ORB_create()

# find the keypoints and descriptors with ORB
kp1, des1 = orb.detectAndCompute(thresh1, None)
kp2, des2 = orb.detectAndCompute(camelot, None)

# create BFMatcher object
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# Match descriptors.
matches = bf.match(des1, des2)

# Sort them in the order of their distance.
matches = sorted(matches, key=lambda x: x.distance)

# Draw first 10 matches.
img3 = cv2.drawMatches(thresh1, kp1, camelot, kp2, matches[:10], None, flags=2)

cv2.imshow("img3",img3)
cv2.waitKey(0)
cv2.destroyAllWindows()
  • Can you add the image of matches (img3)? – Garvita Tiwari Jan 22 '20 at 07:55
  • Computing/finding/estimating a homography from unmatched points alone can be very expensive if you have a lot of points (e.g. all pixels in an image). With sparse points as in your case, it could be possible. For keypoints it is standard to do both: 1. keypoint matching, 2. verification with a homography and inlier/outlier computations. With sparse points and untextured images like in your case, most keypoint descriptors (like ORB) will imho fail. It is good to understand how those keypoint detectors and descriptors work to estimate doability for your actual data. – Micka Jan 22 '20 at 08:16
  • a "direct" way could be like this: https://stackoverflow.com/questions/20642641/opencv-templates-in-2d-point-data-set/20975486#20975486 if you know enough about the presence of some of the points. – Micka Jan 22 '20 at 09:17
  • @Micka Hi Micka, I realised what I am trying to do is quite similar to your post https://stackoverflow.com/questions/20642641/opencv-templates-in-2d-point-data-set/20975486#20975486. But I can't seem to understand the code, as I normally code in Python only. Could you briefly highlight the key things you are trying to do in that code, and I will try to implement it in Python to test? Thanks in advance! – matthew yap Jan 23 '20 at 07:59
  • 1. Find out how many points are needed to describe the pattern's size and orientation in your template (e.g. 2 in the example). From those points it must be possible to reconstruct the whole template: e.g. if it is rotated and uniformly resized you'll need 2 points; if perspective-transformed you'll need 4 points, etc. 2. Randomly choose that number of points in your target image, compute the transformation, and test how well your whole template fits the whole target image (inlier/outlier percentage etc.). 3. Repeat until you are certain that you have found the template or that it isn't present. – Micka Jan 23 '20 at 08:33
  • If you expect your data to be noisy, e.g. you don't know whether all your template points are available in the target image even if the template in general is present, you'll have to choose the N points randomly from your template too and run this whole process several times. – Micka Jan 23 '20 at 08:35
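The random-sampling procedure Micka outlines can be sketched in Python with plain NumPy. This is a hedged illustration, not Micka's original code: it assumes a similarity transform (rotation + uniform scale + translation), so 2 point correspondences suffice, and the template/target point sets below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sparse "template" point set (e.g. centroids of thresholded blobs).
template = np.array([[0, 0], [4, 1], [2, 5], [7, 3], [5, 6]], dtype=float)

# Target: the template rotated, scaled and translated, plus clutter points.
angle, scale, shift = 0.4, 1.3, np.array([10.0, -3.0])
R = np.array([[np.cos(angle), -np.sin(angle)],
              [np.sin(angle),  np.cos(angle)]])
target = np.vstack([template @ R.T * scale + shift,
                    rng.uniform(-10, 20, size=(10, 2))])  # clutter

def similarity_from_pairs(p1, p2, q1, q2):
    """Similarity transform mapping p1->q1 and p2->q2, via complex arithmetic."""
    pz1, pz2 = complex(*p1), complex(*p2)
    qz1, qz2 = complex(*q1), complex(*q2)
    a = (qz2 - qz1) / (pz2 - pz1)  # encodes rotation + uniform scale
    b = qz1 - a * pz1              # translation
    return a, b

def transform(a, b, pts):
    z = pts[:, 0] + 1j * pts[:, 1]
    w = a * z + b
    return np.column_stack([w.real, w.imag])

best_inliers, best_ab = 0, None
for _ in range(5000):  # RANSAC-style random trials
    i, j = rng.choice(len(template), 2, replace=False)
    k1, k2 = rng.choice(len(target), 2, replace=False)
    a, b = similarity_from_pairs(template[i], template[j], target[k1], target[k2])
    proj = transform(a, b, template)
    # An inlier is a projected template point with some target point nearby.
    d = np.linalg.norm(proj[:, None] - target[None, :], axis=2).min(axis=1)
    inliers = int((d < 0.5).sum())
    if inliers > best_inliers:
        best_inliers, best_ab = inliers, (a, b)

print(best_inliers, "of", len(template), "template points matched")
```

For a perspective transform the same loop would sample 4 correspondences and estimate a homography instead (e.g. with `cv2.getPerspectiveTransform`), and as the last comment notes, with noisy templates the template-side points should also be resampled across repeated runs.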

0 Answers