Hi, I tried to perform feature matching on the two images attached below, but the code was not able to match them. The images differ slightly in scale and translation. I've attached the code below for context. Also, is there any way to optimise the code to increase the number of matches between the images? (I've put a rough sketch of one idea I'm considering at the bottom of the post.) Thanks.
Edit: added img3 (the drawn matches) below.
import cv2

# Read both images as grayscale and threshold them to binary at 220
img = cv2.imread("file_path", cv2.IMREAD_GRAYSCALE)
ret, thresh1 = cv2.threshold(img, 220, 255, cv2.THRESH_BINARY)
camelot = cv2.imread("file_path", cv2.IMREAD_GRAYSCALE)
ret, camelot = cv2.threshold(camelot, 220, 255, cv2.THRESH_BINARY)
cv2.imshow('camelot', camelot)
cv2.imshow('thresh1', thresh1)
cv2.waitKey(0)
cv2.destroyAllWindows()
# Initiate ORB detector
orb = cv2.ORB_create()
# find the keypoints and descriptors with ORB
kp1, des1 = orb.detectAndCompute(thresh1, None)
kp2, des2 = orb.detectAndCompute(camelot, None)
# create BFMatcher object
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
# Match descriptors.
matches = bf.match(des1, des2)
# Sort them in the order of their distance.
matches = sorted(matches, key=lambda x: x.distance)
# Draw the matches, best first.
img3 = cv2.drawMatches(thresh1, kp1, camelot, kp2, matches, None, flags=2)
cv2.imshow("img3",img3)
cv2.waitKey(0)
cv2.destroyAllWindows()
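
Would something along these lines be a better direction for getting more matches? This is just a rough sketch (same placeholder file paths as above), where I skip the hard threshold, ask ORB for more keypoints, and filter with Lowe's ratio test via knnMatch instead of crossCheck:

import cv2

# Same placeholder paths as above; read directly as grayscale, no thresholding.
img1 = cv2.imread("file_path", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("file_path", cv2.IMREAD_GRAYSCALE)

# Ask ORB for more keypoints than the default 500.
orb = cv2.ORB_create(nfeatures=5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# crossCheck must be off when using knnMatch.
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=False)
pairs = bf.knnMatch(des1, des2, k=2)

# Lowe's ratio test: keep a match only if it is clearly better than the runner-up.
good = []
for pair in pairs:
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])

img3 = cv2.drawMatches(img1, kp1, img2, kp2, good, None, flags=2)
cv2.imshow("img3", img3)
cv2.waitKey(0)
cv2.destroyAllWindows()

I picked 0.75 for the ratio and 5000 for nfeatures purely as starting points; I'm not sure what sensible values would be for these images.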