
I use SIFT/SURF and ORB, but I sometimes have a problem with the drawMatches function.

Here is the error:

OpenCV Error: Assertion failed (i2 >= 0 && i2 < static_cast<int>(keypoints2.size())) in drawMatches, file /home/opencv-2.4.6.1/modules/features2d/src/draw.cpp, line 208
terminate called after throwing an instance of 'cv::Exception'
  what(): /home/opencv-2.4.6.1/modules/features2d/src/draw.cpp:208: error: (-215) i2 >= 0 && i2 < static_cast<int>(keypoints2.size()) in function drawMatches

The code:

drawMatchPoints(img1,keypoints_img1,img2,keypoints_img2,matches);

I tried swapping img1, keypoints_img1 with img2, keypoints_img2, like this:

drawMatchPoints(img2,keypoints_img2,img1,keypoints_img1,matches);

This calls my function, which also computes a homography:

void drawMatchPoints(cv::Mat image1, std::vector<KeyPoint> keypoints_img1,
                     cv::Mat image2, std::vector<KeyPoint> keypoints_img2,
                     std::vector<cv::DMatch> matches)
{
    cv::Mat img_matches;
    drawMatches(image1, keypoints_img1, image2, keypoints_img2,
                matches, img_matches, Scalar::all(-1), Scalar::all(-1),
                vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
    std::cout << "Number of good matches " << (int)matches.size() << "\n" << endl;

    //-- Localize the object
    std::vector<Point2f> obj;
    std::vector<Point2f> scene;

    for (size_t i = 0; i < matches.size(); i++)
    {
        //-- Get the keypoints from the good matches
        obj.push_back(keypoints_img1[matches[i].queryIdx].pt);
        scene.push_back(keypoints_img2[matches[i].trainIdx].pt);
    }

    Mat H = findHomography(obj, scene, CV_RANSAC);
    std::cout << "Size of homography " << *H.size << std::endl;

    //-- Get the corners from image_1 (the object to be "detected")
    std::vector<Point2f> obj_corners(4);
    obj_corners[0] = cvPoint(0, 0);
    obj_corners[1] = cvPoint(image1.cols, 0);
    obj_corners[2] = cvPoint(image1.cols, image1.rows);
    obj_corners[3] = cvPoint(0, image1.rows);
    std::vector<Point2f> scene_corners(4);

    perspectiveTransform(obj_corners, scene_corners, H);

    //-- Draw lines between the corners (the mapped object in the scene - image_2)
    line(img_matches, scene_corners[0] + Point2f(image1.cols, 0), scene_corners[1] + Point2f(image1.cols, 0), Scalar(0, 255, 0), 4);
    line(img_matches, scene_corners[1] + Point2f(image1.cols, 0), scene_corners[2] + Point2f(image1.cols, 0), Scalar(0, 255, 0), 4);
    line(img_matches, scene_corners[2] + Point2f(image1.cols, 0), scene_corners[3] + Point2f(image1.cols, 0), Scalar(0, 255, 0), 4);
    line(img_matches, scene_corners[3] + Point2f(image1.cols, 0), scene_corners[0] + Point2f(image1.cols, 0), Scalar(0, 255, 0), 4);

    //-- Show detected matches
    cv::imshow("Good Matches & Object detection", img_matches);
    cv::waitKey(5000);
}

But I still get the error!

I have noticed that the error happens when keypoints_img1 is smaller than keypoints_img2:

Size keyPoint1 : 244 - Size keyPoint2 : 400

So if I swap the order in which I load my two pictures it works, but I can't know in advance whether my first picture will have more keypoints than my second one...

My code (the most important steps) for creating the features:

init_Sift(400, 5, 0.04, 25, 1.6);

void init_Sift(int nf, int nOctaveL, double contrastThresh, double edgeThresh, double sigma) {
    this->nfeatureSift = nf;
    this->nOctaveLayerSift = nOctaveL;
    this->contrastThresholdSift = contrastThresh;
    this->edgeThresholdSift = edgeThresh;
    this->sigmaSift = sigma;
}

cv::FeatureDetector* detector = new SiftFeatureDetector(nfeatureSift, nOctaveLayerSift, contrastThresholdSift, edgeThresholdSift, sigmaSift);
cv::DescriptorExtractor* extractor = new SiftDescriptorExtractor;

extractor->compute(image, keypoints, descriptors);

The matching part:

std::cout << "Type of matcher : " << type_of_matcher << std::endl;
if (type_of_matcher == "FLANN" || type_of_matcher == "BF") {
    std::vector<KeyPoint> keypoints_img1 = keyfeatures.compute_Keypoints(img1);
    std::vector<KeyPoint> keypoints_img2 = keyfeatures.compute_Keypoints(img2);

    cv::Mat descriptor_img1 = keyfeatures.compute_Descriptors(img1);
    cv::Mat descriptor_img2 = keyfeatures.compute_Descriptors(img2);

    std::cout << "Size keyPoint1 " << keypoints_img1.size() << "\n" << std::endl;
    std::cout << "Size keyPoint2 " << keypoints_img2.size() << "\n" << std::endl;

    // FLANN with SIFT or SURF
    if (type_of_matcher == "FLANN") {
        Debug::info("USING Matcher FLANN");
        fLmatcher.match(descriptor_img1, descriptor_img2, matches);

        double max_dist = 0;
        double min_dist = 100;

        //-- Quick calculation of max and min distances between keypoints
        for (int i = 0; i < descriptor_img1.rows; i++) {
            double dist = matches[i].distance;
            if (dist < min_dist) min_dist = dist;
            if (dist > max_dist) max_dist = dist;
        }

        std::vector<DMatch> good_matches;

        for (int i = 0; i < descriptor_img1.rows; i++) {
            if (matches[i].distance <= max(2 * min_dist, 0.02)) {
                good_matches.push_back(matches[i]);
            }
        }

        std::cout << "Size of good matches : " << (int)good_matches.size() << std::endl;
        //-- Draw only the "good" matches
        if (!good_matches.empty()) {
            drawMatchPoints(img1, keypoints_img1, img2, keypoints_img2, good_matches);
        }
        else {
            Debug::error("Flann Matcher : Pas de match");
            cv::Mat img_matches;
            drawMatches(img1, keypoints_img1, img2, keypoints_img2,
                        matches, img_matches, Scalar::all(-1), Scalar::all(-1),
                        vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
            cv::imshow("No match", img_matches);
            cv::waitKey(5000);
        }
    }
    // Brute force with SIFT or SURF
    else if (type_of_matcher == "BF") {
        Debug::info("USING Matcher Brute Force");

        bFmatcher.match(descriptor_img1, descriptor_img2, matches);
        if (!matches.empty()) {
            std::nth_element(matches.begin(),       // initial position
                             matches.begin() + 24,  // position of the sorted element
                             matches.end());        // end position
            matches.erase(matches.begin() + 25, matches.end());

            drawMatchPoints(img1, keypoints_img1, img2, keypoints_img2, matches);
            //drawMatchPoints(img2, keypoints_img2, img1, keypoints_img1, matches);
        }
        else {
            Debug::error("Brute Force matcher : Pas de match");
            cv::Mat img_matches;
            drawMatches(img1, keypoints_img1, img2, keypoints_img2,
                        matches, img_matches, Scalar::all(-1), Scalar::all(-1),
                        vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
            cv::imshow("No match", img_matches);
            cv::waitKey(5000);
        }
    }
}

Do you have any suggestions or advice?

EDIT: I solved my problem. It was a C++ problem: I had two classes, one for matching and another for finding key features. In my .h I had declared std::vector<KeyPoint> keypoints as a class member, and the same for the descriptors.

class keyFeatures{

public:
...
std::vector<KeyPoint> keypoints;
...

I deleted this attribute and made a function that takes std::vector<KeyPoint> keypoints as an argument:

cv::Mat descriptor_img1 = keyfeatures.compute_Descriptors(img1,keypoints_img1);

instead of

cv::Mat descriptor_img1 = keyfeatures.compute_Descriptors(img1);

I think there was a conflict when I did the matching... but I don't know why I couldn't declare it in my .h and simply use a local parameter in my function instead.

Thank you !

lilouch
  • As far as I know there is no such thing as cv::drawMatchPoints(). There is however cv::drawMatches(). Can you provide more info and code? The difference in the number of keypoints in both images should not be a problem, since cv::drawMatches() uses the matching data to display the actual matches. All of the remaining keypoints are simply drawn using cv::drawKeypoints(), as you can see from the source code at https://github.com/Itseez/opencv/blob/master/modules/features2d/src/draw.cpp#L169 and #L189. – rbaleksandar Jul 03 '14 at 12:10
  • As for the not knowing in advance part - you actually do know the size of each keypoint-vector for each image BEFORE you call cv::drawMatches() (otherwise you won't be able to call it ;)). As an alternative solution (although still not explaining the problem at hand) you can check the sizes of both keypoint-vectors and swap their places if necessary. The same issue can be seen at http://answers.opencv.org/question/12048/drawmatches-bug/ and it seems that swapping should have resolved the problem. That is why I also asked for more code in the first comment - the matching procedure in particular – rbaleksandar Jul 03 '14 at 12:14
  • Yeah, sorry, I've edited my post. Actually drawMatchPoints is a function I created which contains the function cv::drawMatches(). I tried to exchange the two parameters, but it doesn't work; I get the same error. Concerning the link to answers.opencv.org, I have already taken a look... Thanks! – lilouch Jul 03 '14 at 12:22
  • Hmm, what kind of matcher are you using? Also can you actually show the exact code where you create your matches-vector? I just checked with my own code where I use ORB for stitching aerial image together and I do a cross-matching for each image with all of the rest. I set my ORB to try to detect 600 features per image. Some returned 600 but some less (569 for example), which automatically falls in your situation however no such errors occurred and all went as planned. – rbaleksandar Jul 03 '14 at 13:04
  • A tip for better output: you can use the mask generated by findHomography() with RANSAC in drawMatches() to actually show the RANSACed points and their corresponding matches. Displaying results should ALWAYS be your final step and not the first one. – rbaleksandar Jul 03 '14 at 13:06
  • I used ORB, SURF and SIFT, but I'll keep SIFT. I've edited my post to show you how I do it. As for your tip, I didn't understand it well; can you explain it again, please? And is that how you do it? Many thanks! – lilouch Jul 04 '14 at 08:08
  • Still need the matching part. The feature extraction in this case is not important since your issue is somehow connected to their difference in number and not in the features themselves. As for my tip: findHomography() as well as findFundamentalMat() have a last parameter called "mask", which has a cv::noArray() default value. If you pass your own matrix, the information stored in there can be used by cv::drawMatches() (this time mask is one position before the last parameter, which is DrawMatchesFlag) to draw lines only between the RANSACed keypoints (this is not the only application though). – rbaleksandar Jul 04 '14 at 09:39
  • Sorry, okay, I've put in the matching part. So I have to do something like this: cv::Mat arrayRansac; Mat H = findHomography(obj, scene, CV_RANSAC, 3, arrayRansac); and then use the drawMatches function, replacing the vector<char>() parameter? Thanks – lilouch Jul 04 '14 at 09:51
  • Yes, that's correct. cv::Mat is de facto a vector of vectors of some type (in this case - char). As for your code I see some mistakes and the error might come from there. First thing's first always be careful when you do additional filtering of your matches ESPECIALLY when you apply RANSAC later on them. The way RANSAC works is that it requires both faulty data and correct one. If you filter out most (or even all) of the bad matches (in your case using min/max distance) RANSAC receives a set of mostly good matches => unstable results or even failure to compute the homography. – rbaleksandar Jul 04 '14 at 18:58
  • Second - can you elaborate what exactly your idea is behind the code when matches is empty? If matches is indeed empty, why tinker with the vector at all? Third - the else block seems totally wrong. You create img_matches there and pass it to cv::drawMatches() in the very next line, but img_matches is empty! Fourth - I just noticed the way you pass your arguments to your function. Try passing by reference and not by value by adding & (amp). No need to pass a copy. Here is a question: did you try debugging your code at all? Please do that and check exactly what circumstances cause this error. – rbaleksandar Jul 04 '14 at 19:06
  • Also note the error you get: i2 >= 0 && i2 < static_cast<int>(keypoints2.size()). Since it's a logical AND, if either or both of those terms are false the whole thing fails. I sort of think that the >= 0 is somehow responsible for your problem, but I might be wrong. That is why you need to debug and post your results in your question. – rbaleksandar Jul 04 '14 at 19:07
  • One more thing about the good matches - http://stackoverflow.com/questions/24456788/opencv-how-to-get-inlier-points-using-findhomography-findfundamental-and-ra/24456789 You can use the mask to obtain the good matches (in my case there I was interested in the points) since each cell inside the masked marked with a 1 represents a good match between two points that has passed the RANSAC procedure. You can use the index of that cell and extract the corresponding good match from your matches-vector. – rbaleksandar Jul 05 '14 at 07:14
  • Thank you for your complete answer! I'm actually just beginning with OpenCV, which is maybe why I make some mistakes. I'll take a look and learn how to debug in order to see where the problem is (maybe you have suggestions about that)! However, I've printed all the parameters before passing them to the drawMatches function and they are not empty. Just before the drawMatches call: Size of keypoints_img1 : 84, Size of keypoints_img2 : 400, Matches size : 400. Concerning the empty-matches case, I did that just to show the two pictures together with no matches (so there are no lines between the two pictures). – lilouch Jul 07 '14 at 09:05
  • Ok, still the thing about debugging remains. ;) As for the dumb mistakes - they are not unless you don't learn from them. For the debugging part OpenCV is just a library therefore you can use your knowledge for previous C/C++ programs you've written and debugged to see what's going on under the surface. Sorry I can't with more for now. I'm perplexed myself as to why this is happening especially considering that I've tested the theory with different keypoints in my own code and it's working just fine. – rbaleksandar Jul 07 '14 at 15:09
  • I've solved my problem... If you know why, can you explain? See the end of my post! – lilouch Jul 07 '14 at 15:41
  • Wow, I have just noticed that you are using custom functions for everything - detecting keypoints, computing descriptors. This is very valuable information and I missed it, my bad. I cannot tell you why this solved your problem since I don't know how detect_Keypoints() and compute_Descriptors() works in your case.One bit of advice though - the way you do things in your code is very confusing and hard to read. The issue with using such custom routines in your OpenCV application is that someone familiar with OpenCV will make a certain assumption about what your functions do,which might be wrong. – rbaleksandar Jul 07 '14 at 19:32
  • Yeah, I did custom functions because, since I've implemented ORB, SURF and SIFT with different matchers, I wanted to try to do something clean by creating two classes... If you have time, can I send you my code (not so long) so you can tell me what I can improve? Because if you now tell me that I shouldn't do custom functions, it will be a bit of a mess, no? – lilouch Jul 08 '14 at 07:22
  • Sorry, pretty busy right now. However my advice here is as follows: if you write custom functions (not uncommon practice depending on the task at hand) make sure you include detailed information about those when you post a question. Otherwise people who try to help you have no idea what they are looking at. This applies especially IF you do some additional processing in those functions (in your case here you ALSO filter the good matches!). As we have seen here the problem lies in a custom function you've coded. Write in detail BUT use only simple examples that show how your code works. – rbaleksandar Jul 08 '14 at 10:39
  • Okay, I got it! Many thanks for your patience and your help! – lilouch Jul 09 '14 at 09:34

1 Answer


For somebody like me who searched for this but couldn't find the solution:

Assertion failed (i2 >= 0 && i2 < static_cast<int>(keypoints2.size()))

This means that the assertion failed because i2 was either less than 0 or not less than keypoints2.size(). But what is i2?

From the link that rbaleksandar provided in a comment

int i2 = matches1to2[m].trainIdx;

trainIdx here is an index into keypoints2. The check i2 < static_cast<int>(keypoints2.size()) makes sure that the index is less than keypoints2.size().

For me it occurred because I discarded some keypoints before calling drawMatches, but after the descriptors had been computed, i.e. after DescriptorExtractor::compute was called. This meant that drawMatches referred to the old keypoints through the descriptors while I had changed those keypoints. The end result was that some matches carried a large index while the keypoints vector was small, hence the error.

akhalid7