From the OpenCV documentation:
C++: void SIFT::operator()(InputArray img, InputArray mask, vector<KeyPoint>& keypoints,
                           OutputArray descriptors, bool useProvidedKeypoints=false)
Parameters:
img – Input 8-bit grayscale image
mask – Optional input mask that marks the regions where we should detect features.
keypoints – The input/output vector of keypoints
descriptors – The output matrix of descriptors. Pass cv::noArray()
if you do not need them.
useProvidedKeypoints – Boolean flag. If it is true, the keypoint
detector is not run. Instead, the provided vector of keypoints is
used and the algorithm just computes their descriptors.
I have the following questions:

What values does mask take? I mean, if I wanted to remove the keypoints near the border of the image, should I pass a mask with zeros at the borders and ones in the center?

On another webpage I found a different approach, which uses a method "detect" to detect the keypoints and a method "compute" to compute their descriptors. What is the difference between using detect/compute versus the "operator()" function? With the first approach I first detect the keypoints without computing the descriptors; but if instead I use "operator()" with the
useProvidedKeypoints flag, how am I supposed to compute the keypoints beforehand?

Moreover, what is the difference between brute-force matching and FLANN matching in terms of the number of matched points? I need to obtain the same results as with the
VL_FEAT library for MATLAB, so I want to know which of the two methods comes closer.
For example, the following MATLAB code gives me 2546 detected keypoints:
[f1,d1] = vl_sift(frame1_gray);
Using OpenCV:
std::vector<KeyPoint> keypoints;
cv::SiftFeatureDetector detector;
detector.detect(gray1, keypoints);
cout << keypoints.size() << endl;
just 708!!!
Then, using SIFT::operator(), there is something wrong with the parameters I pass as input:
std::vector<KeyPoint> keypoints;
Mat descriptors;
SIFT S = SIFT();
SIFT::operator(gray1, Mat(), keypoints, descriptors);