Does someone know of a link to an example SIFT implementation with OpenCV 2.2? Regards,
6 Answers
Below is a minimal example:
#include <opencv/cv.h>
#include <opencv/highgui.h>

int main(int argc, const char* argv[])
{
    const cv::Mat input = cv::imread("input.jpg", 0); // Load as grayscale

    cv::SiftFeatureDetector detector;
    std::vector<cv::KeyPoint> keypoints;
    detector.detect(input, keypoints);

    // Add results to image and save.
    cv::Mat output;
    cv::drawKeypoints(input, keypoints, output);
    cv::imwrite("sift_result.jpg", output);

    return 0;
}
Tested on OpenCV 2.3

- How about matching two images? – maximus Nov 11 '11 at 08:12
- Have a look at `OpenCV2.3.1/samples/cpp/matcher_simple.cpp` (https://code.ros.org/trac/opencv/browser/trunk/opencv/samples/cpp/matcher_simple.cpp). You need a `DescriptorMatcher` (like `BruteForceMatcher`); more documentation on those can be found here: http://opencv.itseez.com/modules/features2d/doc/common_interfaces_of_descriptor_matchers.html – Unapiedra Nov 11 '11 at 12:16
- In order to compile your code with OpenCV 2.4.4 I need to add an additional `#include` – Alessandro Jacopson Mar 16 '13 at 15:41
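Following up on the matching question in the comments: below is a minimal sketch of matching two images, assuming the OpenCV 2.3 API used in this answer (in 2.4+ `BruteForceMatcher` was replaced by `BFMatcher` and SIFT moved to the nonfree module). The filenames are placeholders.

#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    // Load both images as grayscale (placeholder filenames).
    cv::Mat img1 = cv::imread("img1.jpg", 0);
    cv::Mat img2 = cv::imread("img2.jpg", 0);

    // Detect SIFT keypoints in both images.
    cv::SiftFeatureDetector detector;
    std::vector<cv::KeyPoint> kp1, kp2;
    detector.detect(img1, kp1);
    detector.detect(img2, kp2);

    // Compute SIFT descriptors at those keypoints.
    cv::SiftDescriptorExtractor extractor;
    cv::Mat desc1, desc2;
    extractor.compute(img1, kp1, desc1);
    extractor.compute(img2, kp2, desc2);

    // Brute-force matching with L2 distance (SIFT descriptors are float vectors).
    cv::BruteForceMatcher<cv::L2<float> > matcher;
    std::vector<cv::DMatch> matches;
    matcher.match(desc1, desc2, matches);

    // Draw the matches side by side and save the result.
    cv::Mat output;
    cv::drawMatches(img1, kp1, img2, kp2, matches, output);
    cv::imwrite("matches.jpg", output);
    return 0;
}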
You can obtain the SIFT detector and SIFT-based extractor in several ways. As others have already suggested the more direct methods, I will provide a more "software engineering" approach that may make your code more flexible to changes (i.e. easier to switch to other detectors and extractors).
Firstly, if you are looking to obtain the detector with built-in parameters, the best way is to use OpenCV's factory methods for creating it. Here's how:
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <vector>

using namespace std;
using namespace cv;

int main(int argc, char *argv[])
{
    Mat image = imread("TestImage.jpg");

    // Create smart pointer for SIFT feature detector.
    Ptr<FeatureDetector> featureDetector = FeatureDetector::create("SIFT");
    vector<KeyPoint> keypoints;

    // Detect the keypoints.
    featureDetector->detect(image, keypoints); // NOTE: featureDetector is a pointer hence the '->'.

    // Similarly, we create a smart pointer to the SIFT extractor.
    Ptr<DescriptorExtractor> featureExtractor = DescriptorExtractor::create("SIFT");

    // Compute the 128 dimension SIFT descriptor at each keypoint.
    // Each row in "descriptors" corresponds to the SIFT descriptor for one keypoint.
    Mat descriptors;
    featureExtractor->compute(image, keypoints, descriptors);

    // If you would like to draw the detected keypoints just to check:
    Mat outputImage;
    Scalar keypointColor = Scalar(255, 0, 0); // Blue keypoints.
    drawKeypoints(image, keypoints, outputImage, keypointColor, DrawMatchesFlags::DEFAULT);

    namedWindow("Output");
    imshow("Output", outputImage);

    char c = ' ';
    while ((c = waitKey(0)) != 'q'); // Keep window open until user presses 'q' to quit.

    return 0;
}
Using the factory methods is flexible because you can now switch to a different keypoint detector or feature extractor, e.g. SURF, simply by changing the argument passed to the "create" factory methods, like this:
Ptr<FeatureDetector> featureDetector = FeatureDetector::create("SURF");
Ptr<DescriptorExtractor> featureExtractor = DescriptorExtractor::create("SURF");
For other possible arguments to pass to create other detectors or extractors see: http://opencv.itseez.com/modules/features2d/doc/common_interfaces_of_feature_detectors.html#featuredetector-create
Now, using the factory methods means you gain the convenience of not having to guess suitable parameters to pass to each of the detectors or extractors, which can be convenient for people new to using them. However, if you would like to create your own custom SIFT detector, you can construct a SiftFeatureDetector object with custom parameters, wrap it in a smart pointer, and refer to it through the featureDetector smart pointer variable as above.
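For instance, here is a minimal sketch of that last point, assuming OpenCV 2.4.x where `SiftFeatureDetector` is a typedef for `cv::SIFT` living in the nonfree module (the parameter values are purely illustrative):

#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/features2d.hpp> // SIFT/SiftFeatureDetector in 2.4.x

int main()
{
    cv::Mat image = cv::imread("TestImage.jpg");

    // Construct SIFT with custom parameters (illustrative values, not recommendations):
    // nfeatures = 0 (keep all), nOctaveLayers = 3, contrastThreshold = 0.06,
    // edgeThreshold = 10, sigma = 1.6
    cv::Ptr<cv::FeatureDetector> featureDetector =
        new cv::SiftFeatureDetector(0, 3, 0.06, 10, 1.6);

    // From here on it is used exactly as with the factory-created detector.
    std::vector<cv::KeyPoint> keypoints;
    featureDetector->detect(image, keypoints);
    return 0;
}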

- I forgot to add - in case it is not obvious to some - that you can mix and match the detectors and extractors. For example, you could have a SIFT detector and a SURF extractor, so that you end up extracting SURF descriptors at SIFT keypoints, and vice versa. Follow the link above to see the other available detectors and extractors. – lightalchemist May 15 '12 at 01:08
- Hi lightalchemist, can we change the dimension of the descriptor from 128 to maybe 64 or something like that? – MMH Mar 11 '14 at 07:04
- In 2.4 beta, SURF and SIFT were moved to the nonfree module to indicate possible legal issues of using those algorithms in user applications. So `opencv2/nonfree/nonfree.hpp` has to be included now. They also inherit from cv::Algorithm, so cv::initModule_nonfree() has to be called before using them to avoid problems with SURF/SIFT algorithm registration (see http://answers.opencv.org/question/411/feature-detector-crash/ and http://stackoverflow.com/questions/11175794/opencv-surf-function-is-not-implemented). – user2301281 Nov 27 '14 at 15:46
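To illustrate that last comment, here is a minimal sketch assuming OpenCV 2.4.x built with the nonfree module:

#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/nonfree.hpp>

int main()
{
    // Register SIFT/SURF with the Algorithm machinery; otherwise
    // FeatureDetector::create("SIFT") returns an empty pointer in 2.4.x.
    cv::initModule_nonfree();

    cv::Ptr<cv::FeatureDetector> detector = cv::FeatureDetector::create("SIFT");
    cv::Ptr<cv::DescriptorExtractor> extractor = cv::DescriptorExtractor::create("SIFT");
    // ... detect and compute as in the answer above.
    return 0;
}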
A simple example using the SIFT nonfree feature detector in OpenCV 2.4:
#include <opencv2/opencv.hpp>
#include <opencv2/nonfree/nonfree.hpp>

using namespace cv;

int main(int argc, char** argv)
{
    if (argc < 2)
        return -1;

    Mat img = imread(argv[1]);

    // Detect keypoints and compute descriptors in one call.
    SIFT sift;
    vector<KeyPoint> key_points;
    Mat descriptors;
    sift(img, Mat(), key_points, descriptors);

    // Draw the keypoints and show the result.
    Mat output_img;
    drawKeypoints(img, key_points, output_img);

    namedWindow("Image");
    imshow("Image", output_img);
    waitKey(0);
    destroyWindow("Image");
    return 0;
}

OpenCV provides SIFT and SURF and other feature descriptors out-of-the-box.
Note that the SIFT algorithm is patented, so it may be incompatible with the regular OpenCV use/license.

Another simple example using the SIFT nonfree feature detector in OpenCV 2.4. Be sure to add the opencv_nonfree240.lib dependency.
#include "cv.h"
#include "highgui.h"
#include <opencv2/nonfree/nonfree.hpp>
int main(int argc, char** argv)
{
cv::Mat img = cv::imread("image.jpg");
cv::SIFT sift(10); //number of keypoints
cv::vector<cv::KeyPoint> key_points;
cv::Mat descriptors, mascara;
cv::Mat output_img;
sift(img,mascara,key_points,descriptors);
drawKeypoints(img, key_points, output_img);
cv::namedWindow("Image");
cv::imshow("Image", output_img);
cv::waitKey(0);
return 0;
}

In case someone is wondering how to do it with two images:
import numpy as np
import cv2

# Load the two images to match (placeholder filenames).
src_img = cv2.imread('source.jpg')
trg_img = cv2.imread('target.jpg')

print('Initiate SIFT detector')
sift = cv2.xfeatures2d.SIFT_create()

print('find the keypoints and descriptors with SIFT')
gcp1, des1 = sift.detectAndCompute(src_img, None)
gcp2, des2 = sift.detectAndCompute(trg_img, None)

# Create BFMatcher object. SIFT descriptors are float vectors,
# so use NORM_L2 (NORM_HAMMING is for binary descriptors such as ORB).
bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = bf.match(des1, des2)

# Sort them in the order of their distance.
matches = sorted(matches, key=lambda x: x.distance)

# Draw only the first 100 matches.
img3 = cv2.drawMatches(src_img, gcp1, trg_img, gcp2, matches[:100], None, flags=2)
