Can somebody please show me sample code or tell me how to use this class and its methods? I just want to match SURF descriptors from a query image against those of an image set using FLANN. I have seen many image-matching examples in the samples, but what still eludes me is a metric to quantify how similar one image is to another. Any help will be much appreciated.
2 Answers
Here's some untested sample code:
#include <opencv2/opencv.hpp>

using namespace std;
using namespace cv;
Mat query; //the query image
vector<Mat> images; //set of images in your db
/* ... get the images from somewhere ... */
vector<vector<KeyPoint> > dbKeypoints;
vector<Mat> dbDescriptors;
vector<KeyPoint> queryKeypoints;
Mat queryDescriptors;
/* ... Extract the descriptors ... */
FlannBasedMatcher flannmatcher;
//train with descriptors from your db
flannmatcher.add(dbDescriptors);
flannmatcher.train();
vector<DMatch> matches;
flannmatcher.match(queryDescriptors, matches);
/* for kk = 0 to matches.size() - 1:
   the best match for queryKeypoints[matches[kk].queryIdx].pt
   is dbKeypoints[matches[kk].imgIdx][matches[kk].trainIdx].pt
*/
Finding the most 'similar' image to the query image depends on your application. Perhaps the number of matched keypoints is adequate. Or you may need a more complex measure of similarity.

Sammy
- Thanks for the reply. "the best match for queryKeypoints[matches[kk].queryIdx].pt is dbKeypoints[matches[kk].imgIdx][matches[kk].trainIdx].pt" How do I do this part? How do I determine the best match; is there an algorithm to implement, or a method in OpenCV? – AquaAsh Apr 18 '11 at 07:28
- The function call flannmatcher.match(queryDescriptors, matches); does the matching. All you have to do is use the indices in the vector matches. – Sammy Apr 18 '11 at 12:21
- Sorry for the late reply, and thanks: I finally understood the index thing. Anyway, I am trying to reduce false positives; can you suggest any more complex measures of similarity? – AquaAsh Apr 25 '11 at 13:32
- @Sammy: With this, is it possible to compute a homography from training images of different sizes? I mean, when you have to do `perspectiveTransform(`, which corners of which training image will you pass? – dynamic Apr 23 '13 at 15:54
To reduce the number of false positives, you can compare the distance to the first nearest neighbor against the distance to the second nearest neighbor, and accept a match only if the ratio distance(query, nearestNeighbor) / distance(query, secondNearestNeighbor) is below some threshold T. The smaller the ratio, the farther the second nearest neighbor is relative to the first, which means the match is more distinctive. This ratio test (introduced in Lowe's SIFT paper) is used in many computer vision papers that involve registration.

filipsch