I am trying to develop a program to track license plates. I need to track each plate and mark it with an ID number so that I call my recognition routine only once per plate. My problem is with the tracking. Once I successfully detect a license plate region, I create a mask so that features are extracted only from the plate, and then I track them across the whole image. I use goodFeaturesToTrack() and calculate optical flow with calcOpticalFlowPyrLK(). This is my algorithm:
- Get a frame from camera.
- Find license plate.
- Extract features from license plate region.
- Get next frame from camera.
- Extract features from frame.
- Call calcOpticalFlowPyrLK()
- While at least half of the license plate features are still tracked successfully, keep tracking and swap the current features with the next features.
Code (only the relevant part):
```
bool licensePlate = false;
while (1)
{
    cap >> frame;
    if (frame.empty())
        break;
    cvtColor(frame, frame, CV_BGR2GRAY);

    // We already have a license plate area -> track it
    if (licensePlate)
    {
        frame.copyTo(image_next);
        goodFeaturesToTrack(image_next, next_features, 50, 0.01, 0.1);
        calcOpticalFlowPyrLK(image_previous, image_next, features, next_features, features_found, err);
        swap(features, next_features);
    }

    // We try to obtain a license plate area
    if (!licensePlate)
    {
        squares = findLicensePlate(frame);
        if (!squares.empty())
        {
            // We have found a license plate area; bb is its bounding box
            licensePlate = true;
            frame.copyTo(image_previous);
            Mat roi(mask, Rect(bb.x, bb.y, bb.width, bb.height));
            roi.setTo(255);
            goodFeaturesToTrack(image_previous, features, 50, 0.01, 0.1, mask);
            mask.setTo(0);
        }
    }
}
```
And the output is:
Green points are previous features and red points are actual features.
As you can see, they are tracked reasonably well, but it looks like they are scaled and drifting away from the license plate. I want to extract features from the license plate only once and after that extract features only from the frame. My logic must be wrong somewhere. What could be the problem here?