I'm trying to detect an image using SURF, following the feature matching tutorial (https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_feature2d/py_matcher/py_matcher.html).
My goal is now to add multiple images to the FlannBasedMatcher and then save it so that it can be loaded again afterwards. When I change the code from the example and add() and train() the descriptors before calling knnMatch(queryDescriptors=des1, k=2) (instead of matches = flann.knnMatch(des1, des2, k=2)), I get different results than in the tutorial example.
surf = cv2.xfeatures2d.SURF_create(800)
...
FLANN_INDEX_KDTREE = 0
index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
search_params = dict(checks=50) # or pass empty dictionary
...
flann = cv2.FlannBasedMatcher(index_params,search_params)
flann.add(des1)
flann.train()
flann.knnMatch(queryDescriptors=des2, k=2)
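For reference, this is a minimal sketch of the full multi-image workflow I am aiming for (the image file names are placeholders, and I am assuming add() takes a list of descriptor arrays, one array per training image):

import cv2

surf = cv2.xfeatures2d.SURF_create(800)  # needs opencv-contrib-python

FLANN_INDEX_KDTREE = 0
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50)
flann = cv2.FlannBasedMatcher(index_params, search_params)

# placeholder training images
for path in ['train1.png', 'train2.png', 'train3.png']:
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    kp, des = surf.detectAndCompute(img, None)
    flann.add([des])   # one descriptor array per training image
flann.train()          # build the FLANN index over everything added so far

query = cv2.imread('query.png', cv2.IMREAD_GRAYSCALE)
kp_q, des_q = surf.detectAndCompute(query, None)
matches = flann.knnMatch(queryDescriptors=des_q, k=2)
# each returned DMatch has an imgIdx telling which of the added images it matched into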
Question 1: Why am I getting different results than in the tutorial?
When changing the value of k in knnMatch(), for example to 6, it returns the 6 nearest matches. With k=2, to find the good matches I check that the best match is sufficiently closer than the second one: m1.distance < 0.8 * m2.distance.
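Concretely, the ratio test I use for k=2 follows the tutorial's pattern, with 0.8 as my threshold:

# matches comes from knnMatch(..., k=2): up to two neighbours per query descriptor
good = []
for pair in matches:
    if len(pair) == 2:                       # knnMatch may return fewer than k neighbours
        m1, m2 = pair
        if m1.distance < 0.8 * m2.distance:  # best match clearly better than second best
            good.append(m1)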
Question 2: With k=6, which of the 6 matches should I use as the anchor, i.e. against which match's distance should I check that a match is not more than 0.8 * distance away?
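To illustrate what I mean, with k=6 each query descriptor gets back a list of up to 6 DMatch objects sorted by increasing distance, and I am unsure which of them the 0.8 ratio should be measured against:

matches = flann.knnMatch(queryDescriptors=des_q, k=6)
for neighbours in matches:
    # neighbours: up to 6 DMatch objects, nearest first
    for m in neighbours:
        print(m.imgIdx, m.trainIdx, m.distance)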