I have written a program that uses goodFeaturesToTrack and calcOpticalFlowPyrLK to track features from frame to frame. The program works reliably and can estimate the optical flow in the preview image on an Android camera relative to the previous frame. Here are some snippets that describe the general process:
goodFeaturesToTrack(grayFrame, corners, MAX_CORNERS, quality_level,
                    min_distance, cv::noArray(), eig_block_size,
                    use_harris, 0.06);
...
if (first_time == true) {
    first_time = false;
    old_corners = corners;
    safe_corners = corners;
    mLastImage = grayFrame;
} else {
    if (old_corners.size() > 0 && corners.size() > 0) {
        safe_corners = corners;
        calcOpticalFlowPyrLK(mLastImage, grayFrame, old_corners, corners,
                             status, error, Size(21, 21), 5,
                             TermCriteria(TermCriteria::COUNT + TermCriteria::EPS,
                                          30, 0.01));
    } else {
        // no features found, so let's start over
        first_time = true;
    }
}
The code above runs over and over in a loop where a new preview frame is grabbed at each iteration. safe_corners, old_corners, and corners are all of type vector<Point2f>. The above code works great.
Now, for each feature that I've identified, I'd like to be able to assign some information about the feature... number of times found, maybe a descriptor of the feature, who knows... My first approach to doing this was:
class Feature : public Point2f {
private:
    // things about a feature that I want to track
public:
    // getters and setters, and of course:
    Feature() : Point2f() {}
    Feature(float a, float b) : Point2f(a, b) {}
};
Next, all of my output arrays are changed from vector<Point2f> to vector<Feature>, which in my own twisted world ought to work because Feature is defined as a descendant class of Point2f. Polymorphism applied, I can't imagine any good reason why this should puke on me unless I did something else horribly wrong.
Here's the error message I get.
OpenCV Error: Assertion failed (func != 0) in void cv::Mat::convertTo(cv::OutputArray, int, double, double) const, file /home/reports/ci/slave50-SDK/opencv/modules/core/src/convert.cpp, line 1095
So, my question to the forum is: do the OpenCV functions truly require a vector<Point2f>, or will a descendant class of Point2f work just as well? The next step would be to get gdb working with mobile code on the Android phone and see more precisely where it crashes, but I don't want to go down that road if my approach is fundamentally flawed.
Alternatively, if a feature is tracked across multiple frames using the approach above, does the address in memory for each point change?
Thanks in advance.