I would like to match a picture against a database that currently contains more than 2,500 pictures, but I need an approach that will still give good results with at least 10k pictures.
I have already read a lot of posts on Stack Overflow, but I couldn't find a proper solution to my problem. I thought about using histograms, but if I understand correctly, they are useful for finding similar images, whereas I need an exact match.
I currently have some code that does the task, but it is too slow (about 6 seconds to find a match against 2,500 images). I'm using the ORB detector (cv2.ORB()) to find keypoints and descriptors, a FlannBasedMatcher, and the findHomography function with RANSAC, as you can see below.
FLANN_INDEX_LSH = 6  # LSH index, suited to binary descriptors like ORB's
flann_params = dict(algorithm=FLANN_INDEX_LSH, table_number=6, key_size=12, multi_probe_level=1)
...
self.matcher = cv2.FlannBasedMatcher(flann_params, {})
...
(_, status) = cv2.findHomography(ptsA, ptsB, cv2.RANSAC, 4.0)  # 4.0 = reprojection threshold in pixels
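For reference, the per-image scoring I'm doing is essentially the following. This is a simplified sketch in plain NumPy (Hamming matching instead of FLANN, no cv2 dependency), with descriptor arrays assumed to be precomputed by ORB; the function names `hamming_dist_matrix` and `score_pair` are just mine for illustration:

```python
import numpy as np

def hamming_dist_matrix(da, db):
    """Pairwise Hamming distances between two uint8 ORB descriptor arrays.

    da: (n, 32) uint8, db: (m, 32) uint8 -> (n, m) int matrix.
    """
    # XOR keeps the differing bits, then unpackbits + sum counts them.
    x = da[:, None, :] ^ db[None, :, :]           # (n, m, 32)
    return np.unpackbits(x, axis=-1).sum(axis=-1)  # (n, m)

def score_pair(query_desc, db_desc, ratio=0.75):
    """Count query descriptors that pass Lowe's ratio test against one DB image."""
    d = hamming_dist_matrix(query_desc, db_desc)
    order = np.argsort(d, axis=1)
    best, second = order[:, 0], order[:, 1]
    rows = np.arange(d.shape[0])
    keep = d[rows, best] < ratio * d[rows, second]
    return int(keep.sum())
```

In my real code this score is computed per database image (plus the RANSAC inlier count from findHomography), and the image with the highest score wins; this brute-force version is only meant to show the logic, not to be fast.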
I want to know if there is a better and, more importantly, faster way to match against my database, and maybe a different way to store the pictures in the database (I currently save the keypoints and descriptors).
I hope I was clear enough; if you need more details, please ask in the comments.
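To show what I mean by storing descriptors: right now I do roughly the equivalent of the sketch below, keeping every image's descriptor array in a single compressed .npz file keyed by image id and loading it all once at startup (the helper names `save_descriptor_db` / `load_descriptor_db` are just for this example):

```python
import numpy as np

def save_descriptor_db(path, desc_by_image):
    """Persist {image_id: (n_i, 32) uint8 ORB descriptor array} to one .npz file."""
    np.savez_compressed(path, **desc_by_image)

def load_descriptor_db(path):
    """Load the whole descriptor database back into memory, e.g. once at startup."""
    with np.load(path) as data:
        return {key: data[key] for key in data.files}
```

(ORB keypoints themselves can't be pickled directly in OpenCV's Python bindings, so I only keep the (x, y) coordinates alongside the descriptors in a similar array.)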