
I tried to extract SIFT keypoints. It works fine for a sample image I downloaded (height 400 px, width 247 px, horizontal and vertical resolution 300 dpi). The image below shows the extracted points.

[image: SIFT keypoints extracted from the sample image]

Then I tried to apply the same code to an image that I had taken and edited myself (height 443 px, width 541 px, horizontal and vertical resolution 72 dpi).

[image: my edited photo]

To create the above image I rotated the original image, removed its background, and resized it in Photoshop. For that image, however, my code doesn't extract features the way it does for the first image.

See the result:

[image: result with only a few extracted keypoints]

It extracts only a very few points, whereas I expect a result like the first case. When I use the original image without any edits, the program finds points as in the first case. Here is the simple code I used:

#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <opencv2/nonfree/nonfree.hpp>

using namespace cv;
using namespace std;   // needed for vector

int main()
{
    Mat src, descriptors, dest;
    vector<KeyPoint> keypoints;

    src = imread(". . .");
    cvtColor(src, src, CV_BGR2GRAY);

    SIFT sift;
    sift(src, src, keypoints, descriptors, false);

    drawKeypoints(src, keypoints, dest);
    imshow("Sift", dest);
    waitKey(0);
    return 0;
}

What am I doing wrong here? What do I need to do to get a result like the first case for my own image after resizing?

Thank you!

Grant

1 Answer


Try setting the nfeatures parameter in the SIFT constructor (other parameters may also need adjustment).

Here is the constructor definition from the reference:

SIFT::SIFT(int nfeatures=0, int nOctaveLayers=3, double contrastThreshold=0.04, double edgeThreshold=10, double sigma=1.6)

Your code becomes:

#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <opencv2/nonfree/nonfree.hpp>

using namespace cv;
using namespace std;

int main()
{
    Mat src, descriptors, dest;
    vector<KeyPoint> keypoints;

    src = imread("D:\\ImagesForTest\\leaf.jpg");
    cvtColor(src, src, CV_BGR2GRAY);

    SIFT sift(2000, 3, 0.004);
    sift(src, src, keypoints, descriptors, false);

    drawKeypoints(src, keypoints, dest);
    imshow("Sift", dest);
    waitKey(0);
    return 0;
}

The result:

[image: result with the adjusted SIFT parameters]

Dense sampling example:

#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>
#include "opencv2/nonfree/nonfree.hpp"

int main(int argc, char* argv[])
{
    cv::initModule_nonfree();
    cv::namedWindow("result");
    cv::Mat bgr_img = cv::imread("D:\\ImagesForTest\\lena.jpg");
    if (bgr_img.empty()) 
    {
        exit(EXIT_FAILURE);
    }
    cv::Mat gray_img;
    cv::cvtColor(bgr_img, gray_img, cv::COLOR_BGR2GRAY);
    cv::normalize(gray_img, gray_img, 0, 255, cv::NORM_MINMAX);
    cv::DenseFeatureDetector detector(12.0f, 1, 0.1f, 10);
    std::vector<cv::KeyPoint> keypoints;
    detector.detect(gray_img, keypoints);
    std::vector<cv::KeyPoint>::iterator itk;
    for (itk = keypoints.begin(); itk != keypoints.end(); ++itk) 
    {
        std::cout << itk->pt << std::endl;
        cv::circle(bgr_img, itk->pt, itk->size, cv::Scalar(0,255,255), 1, CV_AA);
        cv::circle(bgr_img, itk->pt, 1, cv::Scalar(0,255,0), -1);
    }
    cv::Ptr<cv::DescriptorExtractor> descriptorExtractor = cv::DescriptorExtractor::create("SURF");
    cv::Mat descriptors;
    descriptorExtractor->compute( gray_img, keypoints, descriptors);
    // The descriptor can return large negative values when the patch
    // goes off the edge of the image; clamp those to zero.
    descriptors.setTo(0, descriptors<0);
    imshow("result",bgr_img);
    cv::waitKey();
    return 0;
}

The result:

[image: dense sampling result on lena.jpg]

Andrey Smorodov
  • I tried to use this, but the result is the same as before. – Grant Jul 25 '14 at 19:41
  • Try lowering the contrast threshold. You could also try another detector, e.g. FAST or Harris/Hessian, and leave SIFT for description. – old-ufo Jul 25 '14 at 19:54
  • Thanks for your support, it works OK now. As old-ufo said, I tested the images with the FAST corner-detection algorithm and I can extract more keypoints than with SIFT. Can I accurately categorize objects using the keypoints given by FAST? – Grant Jul 26 '14 at 05:06
  • I think yes, you can, but it'll be better to use dense feature sampling. I've added the code to my answer. – Andrey Smorodov Jul 26 '14 at 05:41
  • Can you please explain a bit how to use the dense features for object categorization? – Grant Jul 26 '14 at 06:01
  • Build an input vector from all the extracted features and feed it to a classifier (neural network, SVM, etc.). The input vector is a row vector consisting of all features stacked one by one (one SURF feature gives you 128 values), so the dimension of your vector should be 128*n_features. First you need to train the classifier, of course. – Andrey Smorodov Jul 26 '14 at 06:14
  • I cannot understand what this yellow-coloured grid is? – Grant Jul 26 '14 at 07:23
  • It's the keypoints. The centers are the small green circles; the keypoint sizes (the area within which the feature is computed) are the large yellow circles. The features overlap, which usually gives better results. About SURF size you can read here: http://stackoverflow.com/questions/10328298/what-does-size-and-response-exactly-represent-in-a-surf-keypoint# – Andrey Smorodov Jul 26 '14 at 09:07
  • Thanks Andrey, I'll research this. – Grant Jul 26 '14 at 09:32