I have a program that takes a picture as input, and whose objective is to determine whether a certain object (essentially an image) is contained within that picture. If so, it tries to estimate the object's position. This works really well when the object is actually in the picture, but I get a lot of false positives as soon as I put anything complex enough in the picture.
I was wondering if there is a good way to filter out these false positives, ideally something that is not too computationally expensive.
My program is based on the tutorial found here, except that I use BRISK instead of SURF, so I don't need the contrib modules.
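For reference, the part of my code before the matching looks roughly like this (a sketch, not my exact code; the file names and BRISK parameters are placeholders, and the matcher is created with Hamming distance because BRISK produces binary descriptors):

// img1 is the object I am looking for, img2 is the picture to search in
Mat img1 = imread("object.png", IMREAD_GRAYSCALE);
Mat img2 = imread("scene.png", IMREAD_GRAYSCALE);

Ptr<BRISK> detector = BRISK::create();

std::vector<KeyPoint> keyImg1, keyImg2;
Mat descImg1, descImg2;
detector->detectAndCompute(img1, noArray(), keyImg1, descImg1);
detector->detectAndCompute(img2, noArray(), keyImg2, descImg2);

// BRISK descriptors are binary, so the brute-force matcher uses Hamming distance
Ptr<DescriptorMatcher> descriptorMatcher = DescriptorMatcher::create("BruteForce-Hamming");
std::vector<DMatch> matches;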
HOW I GET MATCHES
// One nearest-neighbour match per descriptor of the object image (descImg1)
descriptorMatcher->match(descImg1, descImg2, matches, Mat());
GOOD MATCHES
//-- Quick calculation of max and min distances between keypoints
//-- (BRISK gives Hamming distances that can exceed the tutorial's initial 100,
//--  so min_dist starts at DBL_MAX instead)
double max_dist = 0; double min_dist = DBL_MAX;
for( int i = 0; i < descImg1.rows; i++ )
{
    double dist = matches[i].distance;
    if( dist < min_dist ) min_dist = dist;
    if( dist > max_dist ) max_dist = dist;
}

//-- Keep only matches whose distance is small compared to the best one
std::vector< DMatch > good_matches;
for( int i = 0; i < descImg1.rows; i++ )
{
    if( matches[i].distance < 4*min_dist )
        good_matches.push_back( matches[i] );
}
HOMOGRAPHY
std::vector<Point2f> obj;
std::vector<Point2f> scene;
for( size_t i = 0; i < good_matches.size(); i++ )
{
    //-- Get the keypoints from the good matches
    obj.push_back( keyImg1[ good_matches[i].queryIdx ].pt );
    scene.push_back( keyImg2[ good_matches[i].trainIdx ].pt );
}
Mat H = findHomography( obj, scene, RANSAC );
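The call above ignores the inlier mask that findHomography can also return; if it helps with judging the result, retrieving it would look roughly like this (a sketch replacing the line above; the reprojection threshold of 3 is just the default):

// Variant of the call above that also asks RANSAC for its inlier mask
Mat inlierMask;
Mat H = findHomography( obj, scene, RANSAC, 3, inlierMask );

// Each non-zero entry marks a match consistent with H
int inlierCount = countNonZero( inlierMask );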
OBJECT CORNERS
std::vector<Point2f> obj_corners(4);
obj_corners[0] = Point2f( 0, 0 );                  obj_corners[1] = Point2f( img1.cols, 0 );
obj_corners[2] = Point2f( img1.cols, img1.rows );  obj_corners[3] = Point2f( 0, img1.rows );
std::vector<Point2f> scene_corners(4);
perspectiveTransform( obj_corners, scene_corners, H);
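The projected corners are then used to mark the estimated position, essentially as in the tutorial (a sketch; img_matches here is the side-by-side image produced by drawMatches, so the scene is offset by the object's width):

// Draw the projected object outline onto the scene half of the match image
Point2f offset( (float)img1.cols, 0 );
line( img_matches, scene_corners[0] + offset, scene_corners[1] + offset, Scalar(0, 255, 0), 4 );
line( img_matches, scene_corners[1] + offset, scene_corners[2] + offset, Scalar(0, 255, 0), 4 );
line( img_matches, scene_corners[2] + offset, scene_corners[3] + offset, Scalar(0, 255, 0), 4 );
line( img_matches, scene_corners[3] + offset, scene_corners[0] + offset, Scalar(0, 255, 0), 4 );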