
I am making a program that tracks features with ORB from OpenCV (2.4.3). I followed this tutorial and used advice from here.

My goal is to track an object (a face) in the video feed and draw a rectangle around it.

My program finds keypoints and matches them correctly, but when I try to use findHomography + perspectiveTransform to find the new corners of the object in the scene, it usually returns nonsense values (though it sometimes returns a correct homography).

Here is an example picture: example

Here is the corresponding problematic part:

Mat H = findHomography( obj, scene, CV_RANSAC );  

//-- Get the corners from the image_1 ( the object to be "detected" )
std::vector<Point2f> obj_corners(4);
obj_corners[0] = Point2f( 0, 0 ); obj_corners[1] = Point2f( img_object.cols, 0 );
obj_corners[2] = Point2f( img_object.cols, img_object.rows ); obj_corners[3] = Point2f( 0, img_object.rows );
std::vector<Point2f> scene_corners(4);

perspectiveTransform( obj_corners, scene_corners, H);

//-- Draw lines between the corners (the mapped object in the scene - image_2 )
line( img_matches, scene_corners[0] + Point2f( img_object.cols, 0), scene_corners[1] + Point2f( img_object.cols, 0), Scalar(0, 255, 0), 4 );
line( img_matches, scene_corners[1] + Point2f( img_object.cols, 0), scene_corners[2] + Point2f( img_object.cols, 0), Scalar( 0, 255, 0), 4 );
line( img_matches, scene_corners[2] + Point2f( img_object.cols, 0), scene_corners[3] + Point2f( img_object.cols, 0), Scalar( 0, 255, 0), 4 );
line( img_matches, scene_corners[3] + Point2f( img_object.cols, 0), scene_corners[0] + Point2f( img_object.cols, 0), Scalar( 0, 255, 0), 4 );

The rest of the code is practically the same as in the links I provided. The lines drawn seem completely random. My goal is only to get the minimal rectangle of the source object in the new scene, so if there is an alternative to using homography, that works too.

P.S. The source image to track is a region copied from the video input and then tracked in new frames from that input; does that matter?
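
For reference, a minimal sketch of one such alternative (not from the original code; the names scene and frame are assumptions): skip the homography entirely and take the bounding rectangle of the matched scene keypoints.

#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Smallest upright rectangle containing the matched scene keypoints
cv::Rect minimalBox(const std::vector<cv::Point2f>& scene)
{
    std::vector<cv::Point> pts;
    for (size_t i = 0; i < scene.size(); ++i)
        pts.push_back(cv::Point(cvRound(scene[i].x), cvRound(scene[i].y)));
    return cv::boundingRect(pts);
}

// Usage: cv::rectangle(frame, minimalBox(scene), cv::Scalar(0, 255, 0), 2);

This is coarser than a homography (no rotation or perspective), but it never produces the degenerate shapes a bad homography can.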

  • Providing more information e.g. images of the output would be useful. – Jacob Parker Mar 18 '13 at 21:22
  • well it's something like here: http://dl.dropbox.com/u/5481096/Clipboard02.jpg. – user2184001 Mar 18 '13 at 21:47
  • I edited the image into your question. If you want to provide more information in the future it is best to edit your question so that more people can easily see it. – Jacob Parker Mar 18 '13 at 21:55
  • Seems like your points are very close along a line. Estimating a homography is not possible in that case. – Ela782 Nov 23 '14 at 22:35
  • Why don't you use Viola-Jones to track faces instead of feature descriptors? You can check this topic: http://stackoverflow.com/questions/5808434/how-does-the-viola-jones-face-detection-method-work – flaviussn Apr 05 '16 at 11:23
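
The Viola-Jones detector mentioned in the last comment is exposed in OpenCV as CascadeClassifier. A minimal sketch, assuming a Haar cascade file is available on disk (the file path is an assumption, not from the question):

#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/objdetect/objdetect.hpp>

std::vector<cv::Rect> detectFaces(const cv::Mat& frame)
{
    // Assumed path to one of the standard OpenCV Haar cascades
    static cv::CascadeClassifier cascade("haarcascade_frontalface_alt.xml");

    cv::Mat gray;
    cv::cvtColor(frame, gray, CV_BGR2GRAY);
    cv::equalizeHist(gray, gray);      // improve contrast before detection

    std::vector<cv::Rect> faces;
    cascade.detectMultiScale(gray, faces, 1.1, 3, 0, cv::Size(30, 30));
    return faces;
}

Unlike feature matching, this detects a face in every frame directly, so no homography is needed at all.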

1 Answer


The function getPerspectiveTransform estimates the homography under the assumption that your corresponding point set is free of errors. However, with real-world data you cannot assume that. The solution is to use a robust estimation method such as RANSAC, which solves the homography problem as an overdetermined system of equations.

You can use the findHomography function instead, which returns a homography. Its input is a set of point correspondences; at least 4 are needed, but a larger set is better. The resulting homography is still only an estimate, but one that is much more robust against errors. With the CV_RANSAC flag it removes outliers (wrong point correspondences) internally.
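
A minimal sketch of that usage, with the inlier mask used as a sanity check (the variable names obj and scene match the question; the inlier threshold of 8 is an assumption):

#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/calib3d/calib3d.hpp>

cv::Mat robustHomography(const std::vector<cv::Point2f>& obj,
                         const std::vector<cv::Point2f>& scene)
{
    if (obj.size() < 4 || obj.size() != scene.size())
        return cv::Mat();                        // not enough correspondences

    std::vector<uchar> inlierMask;               // 1 = inlier, 0 = rejected outlier
    cv::Mat H = cv::findHomography(obj, scene, CV_RANSAC, 3.0, inlierMask);

    // If only a handful of matches survive RANSAC, the estimate is unreliable
    if (cv::countNonZero(inlierMask) < 8)
        return cv::Mat();
    return H;
}

Checking the inlier count (or rejecting homographies whose transformed corners fold over each other) avoids drawing the random-looking quadrilaterals shown in the question.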

Tobias Senst