
I am developing an application that uses FREAK descriptors, just released in OpenCV 2.4.2.

In the documentation only two functions appear:

  • The class constructor

  • A confusing method selectPairs()

I want to use my own detector and then run the FREAK descriptor extractor on the detected keypoints, but I don't clearly understand how the class works.

Question:

Do I strictly need to use selectPairs()? Is it enough to just call FREAK.compute()? I don't really understand what selectPairs is for.

Alexis King
Jav_Rock

2 Answers


I just flicked through the paper and saw in paragraph 4.2 that the authors set up a method to select the pairs of receptive fields to evaluate in their descriptor, since taking all possible pairs would be too much of a burden. The selectPairs() function lets you recompute this set of pairs.

I afterwards read the documentation, where they point exactly to this paragraph of the original article. Also, a few comments in the documentation tell you that there is an already available, offline-learned set of pairs that is ready to use with the FREAK descriptor. So I guess, at least for a start, you could just use the precomputed pairs, and pass the list of KeyPoints obtained from your detector as an argument to FREAK.compute.

If your results are disappointing, you could try the keypoint selection method used in the original paper (paragraph 2.1), and ultimately learn your own set of pairs.

remi
  • Yes, I guessed it was something like that. What I don't know is why you need a detector for keypoints if you have an algorithm to select pairs. That is not clear enough for me in the paper. I am developing a pyrFAST with FREAK and I will see what happens. – Jav_Rock Sep 20 '12 at 11:37
  • 1
    My understanding is that, given a keypoint location (e.g. using pyrFAST), the descriptor is computed as the sign of the difference between pairs of points (!= the keypoint) in a small neighbourhood around this keypoint. This is very similar to BRIEF if you are familiar with it. But FREAK has a method which sample the locations in the neighbourhood inspired by the human vision system – remi Sep 20 '12 at 13:32
  • Alright, that makes more sense to me. Yes, I know BRIEF (which samples random locations) and BRISK, which samples in circles. FREAK samples in circles too, but they are redundant and coarse-to-fine like the retina. What I hadn't noticed is that this is done in the neighbourhood of a keypoint, which makes sense now. Thanks! – Jav_Rock Sep 20 '12 at 13:51
  • Hi, how do they compute the FREAK patterns? I don't know what equation they use to compute the radius of each circle, the sigma for smoothing, etc. Does anybody know how to create those patterns (mathematically: the equation for the radius at each level, the sigma, etc.)? – user570593 Nov 10 '12 at 13:07
  • 1
    AFAIK, the equation is not written but is a direct translation of the figure in the reference paper. For training, the detector used by the authors was AGAST, which is similar to a scale-sensitive FAST (like pyrFast). More precisely, AGAST is the detector used in the BRISK paper, you can find the code on the BRISK's paper author webiste. – sansuiso Mar 22 '13 at 13:32
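The binary-test idea described in the comments above (each descriptor bit is the sign of an intensity difference between a fixed pair of sampling points around the keypoint) can be sketched without any OpenCV at all. PointPair and binaryDescriptor are hypothetical names for illustration, not part of the FREAK API:

```cpp
#include <cassert>
#include <cstdint>
#include <cstddef>
#include <vector>

// One binary test: compare the smoothed intensities at sampling
// points a and b of the pattern around the keypoint.
struct PointPair { int a; int b; };

// Build a descriptor where bit i is 1 iff intensity[a] > intensity[b]
// for the i-th selected pair (up to 8 pairs in this toy version;
// real FREAK uses 512 pairs, i.e. a 64-byte descriptor).
std::uint8_t binaryDescriptor(const std::vector<int>& intensities,
                              const std::vector<PointPair>& pairs)
{
    std::uint8_t desc = 0;
    for (std::size_t i = 0; i < pairs.size(); ++i)
        if (intensities[pairs[i].a] > intensities[pairs[i].b])
            desc |= static_cast<std::uint8_t>(1u << i);
    return desc;
}
```

Matching then reduces to the Hamming distance between two such bit strings, which is why the answer below uses a Hamming-distance matcher.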
#include <iostream>
#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>

using namespace cv;
using namespace std;

int main()
{
    // Load both images as grayscale.
    Mat image1 = imread("C:\\lena.jpg", 0);
    Mat image2 = imread("C:\\lena1.bmp", 0);
    if (image1.empty() || image2.empty()) {
        cerr << "Could not load input images" << endl;
        return 1;
    }

    vector<KeyPoint> keypointsA, keypointsB;
    Mat descriptorsA, descriptorsB;
    vector<DMatch> matches;

    // Detect up to 400 ORB keypoints, describe them with FREAK,
    // and match the binary descriptors with the Hamming distance
    // (BFMatcher replaces the legacy BruteForceMatcher<Hamming>).
    OrbFeatureDetector detector(400);
    FREAK extractor;
    BFMatcher matcher(NORM_HAMMING);

    detector.detect(image1, keypointsA);
    detector.detect(image2, keypointsB);

    extractor.compute(image1, keypointsA, descriptorsA);
    extractor.compute(image2, keypointsB, descriptorsB);

    matcher.match(descriptorsA, descriptorsB, matches);

    // Keep only the 30 matches with the smallest distance
    // (DMatch::operator< compares distances).
    const size_t nofmatches = 30;
    if (matches.size() > nofmatches) {
        nth_element(matches.begin(), matches.begin() + nofmatches, matches.end());
        matches.erase(matches.begin() + nofmatches, matches.end());
    }

    Mat imgMatch;
    drawMatches(image1, keypointsA, image2, keypointsB, matches, imgMatch);

    imshow("matches", imgMatch);
    waitKey(0);

    return 0;
}

This is a simple application that matches points in two images. I used ORB to detect keypoints and FREAK as the descriptor on those keypoints, then brute-force matching to find the corresponding points in the two images, keeping the 30 points with the best match. Hope this helps you somewhat.
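The nth_element-based trimming used above can be illustrated on plain distance values; keepBest is a hypothetical helper, shown only to make the top-k pattern concrete:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Keep only the k smallest distances (smaller distance = better match).
// nth_element partially sorts so that the first k elements are the k
// smallest, in unspecified order, which is all top-k selection needs.
std::vector<float> keepBest(std::vector<float> dist, std::size_t k)
{
    if (dist.size() > k) {
        std::nth_element(dist.begin(), dist.begin() + k, dist.end());
        dist.erase(dist.begin() + k, dist.end());  // erase from +k, not +k+1
    }
    return dist;
}
```

For example, keepBest({5, 1, 4, 2, 3}, 3) returns {1, 2, 3} in some unspecified order.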

rotating_image