There is a MATLAB example that matches two images and outputs the rotation and scale: https://de.mathworks.com/help/vision/examples/find-image-rotation-and-scale-using-automated-feature-matching.html?requestedDomain=www.mathworks.com

My goal is to recreate this example in C++. I am using the same keypoint detection method (Harris), and the keypoints seem to be mostly identical to the ones MATLAB finds. So far so good.

    // Detect Harris corners ('true' selects the Harris measure, k = 0.04)
    // and wrap each corner in a cv::KeyPoint with a diameter of 5 px.
    cv::goodFeaturesToTrack(image_grayscale, corners, number_of_keypoints, 0.01, 5, mask, 3, true, 0.04);
    for (size_t i = 0; i < corners.size(); i++) {
        keypoints.push_back(cv::KeyPoint(corners[i], 5));
    }

BRISK is used to extract feature descriptors at these keypoints.

// BRISK parameters: detection threshold, number of octaves, pattern scale.
int Threshl = 120;
int Octaves = 8;
float PatternScales = 1.0f;

cv::Ptr<cv::Feature2D> extractor = cv::BRISK::create(Threshl, Octaves, PatternScales);
extractor->compute(image, mykeypoints, descriptors);

These descriptors are then matched using cv::FlannBasedMatcher.

cv::FlannBasedMatcher matcher;
matcher.match(descriptors32A, descriptors32B, matches);
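
Side note: FLANN's default KD-tree index only works on CV_32F descriptors, while BRISK produces binary CV_8U descriptors, which is presumably what the names descriptors32A/descriptors32B refer to. A sketch of that conversion, plus FLANN's LSH index as an alternative that handles the binary descriptors directly (descriptorsA/descriptorsB are placeholder names for the raw BRISK output):

// Assumed conversion behind descriptors32A/descriptors32B: FLANN's default
// KD-tree index requires CV_32F, but BRISK descriptors are binary CV_8U.
cv::Mat descriptors32A, descriptors32B;
descriptorsA.convertTo(descriptors32A, CV_32F);
descriptorsB.convertTo(descriptors32B, CV_32F);

// Alternative: match the binary descriptors directly with an LSH index
// instead of converting them to float.
cv::FlannBasedMatcher lshMatcher(cv::makePtr<cv::flann::LshIndexParams>(12, 20, 2));
std::vector<cv::DMatch> matches;
lshMatcher.match(descriptorsA, descriptorsB, matches);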

Now the problem is that about 80% of my matches are wrong and unusable. For the identical set of images, MATLAB returns only a couple of matches, of which only ~20% are wrong. I have tried sorting the matches in C++ by their distance value, with no success. The distance values range between 300 and 700, and even the matches with the lowest distances are almost entirely incorrect.
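
For reference, sorting matches by distance can be done like this (a minimal sketch; the top-50 cutoff is an arbitrary choice):

// Sort ascending by descriptor distance so the best matches come first
// (cv::DMatch also defines operator< on distance, so a plain std::sort works).
std::sort(matches.begin(), matches.end(),
          [](const cv::DMatch& a, const cv::DMatch& b) {
              return a.distance < b.distance;
          });
// Keep only the best matches, e.g. the top 50.
if (matches.size() > 50)
    matches.resize(50);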

Now, 20% good matches would be enough to calculate the offset, but a lot of processing power is wasted on checking wrong matches. What would be a better way to sort out the correct matches, or is there something obvious I am doing wrong?

EDIT:

I have switched from Harris/BRISK to AKAZE, which seems to deliver much better features and matches that can easily be sorted by their distance value. The only downside is the much higher computation time: with two 1000 px wide images, AKAZE needs half a minute to find the keypoints (on a PC). I reduced this by scaling down the images, which makes for an acceptable ~3-5 seconds.
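
For reference, a minimal sketch of this AKAZE + downscaling approach (the default AKAZE parameters, the placeholder names imageA/imageB, and the 0.5 scale factor are assumptions):

// Downscale both images first to cut AKAZE's detection time.
cv::Mat smallA, smallB;
cv::resize(imageA, smallA, cv::Size(), 0.5, 0.5, cv::INTER_AREA);
cv::resize(imageB, smallB, cv::Size(), 0.5, 0.5, cv::INTER_AREA);

// AKAZE detects keypoints and computes binary descriptors in one call.
cv::Ptr<cv::AKAZE> akaze = cv::AKAZE::create();
std::vector<cv::KeyPoint> keypointsA, keypointsB;
cv::Mat descriptorsA, descriptorsB;
akaze->detectAndCompute(smallA, cv::noArray(), keypointsA, descriptorsA);
akaze->detectAndCompute(smallB, cv::noArray(), keypointsB, descriptorsB);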

MaxMKA

1 Answer


The method you are using finds a nearest neighbour for each point, no matter how far away it is. Two strategies are common:

1. Match set A to set B and set B to set A, and keep only the matches that appear in both matchings (cross-checking).
2. Use knnMatch with k = 2 and perform a ratio check, i.e. keep only the matches where the first nearest neighbour is much closer than the second, e.g. d1 < 0.8 * d2.
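
A minimal sketch of both strategies, assuming binary descriptors (hence cv::NORM_HAMMING) and placeholder names descriptorsA/descriptorsB:

// Strategy 1: cross-checking. BFMatcher with crossCheck=true keeps only
// matches that are mutual nearest neighbours (A->B and B->A agree).
cv::BFMatcher crossMatcher(cv::NORM_HAMMING, true);
std::vector<cv::DMatch> crossMatches;
crossMatcher.match(descriptorsA, descriptorsB, crossMatches);

// Strategy 2: ratio test. Query the two nearest neighbours per descriptor
// and keep a match only if the best is clearly closer than the second best.
cv::BFMatcher matcher(cv::NORM_HAMMING);
std::vector<std::vector<cv::DMatch>> knnMatches;
matcher.knnMatch(descriptorsA, descriptorsB, knnMatches, 2);

std::vector<cv::DMatch> goodMatches;
for (const auto& pair : knnMatches) {
    if (pair.size() == 2 && pair[0].distance < 0.8f * pair[1].distance)
        goodMatches.push_back(pair[0]);
}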

The MATLAB code uses SURF. OpenCV also provides SURF, SIFT and AKAZE; try one of these. SURF in particular would be interesting for a comparison.

  • Thanks for your answer! I just tried matching the images from A->B and B->A and comparing the matches. Unfortunately this doesn't seem to be much of an improvement. I have also tried to perform the ratio check described in this answer: http://stackoverflow.com/a/17977207. It looks like there is something wrong with the feature extractor. This example is made to work with SURF: http://docs.opencv.org/2.4/doc/tutorials/features2d/feature_homography/feature_homography.html But if I change it to use BRISK I get the same bad matching. I would really like to avoid the nonfree libraries. – MaxMKA Apr 01 '17 at 22:33
  • BRISK provides its own keypoint detector, which should work better than the Harris corners computed by goodFeaturesToTrack. I would give that a try instead; Harris does not consider scale. – voidpointer Apr 02 '17 at 05:56
  • Using BRISK both to find keypoints and to extract features didn't improve the results. I tried AKAZE instead of BRISK, and it outputs very good features. Sorting them by their distance makes for almost 100% accurate matches. The only problem is that it takes half a minute to compute. Maybe this can be improved by downsizing the images or changing parameters, but the results are incredibly good compared to BRISK and ORB. – MaxMKA Apr 02 '17 at 12:03