Does anyone happen to know why the OpenCV 2 DescriptorMatcher::radiusMatch() and knnMatch() take a vector<vector<DMatch> >& matches? I'm a bit confused about why it wouldn't just be a vector, since it's just a single array of points in the scene that correspond to the training image, right?
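For reference, this is roughly how I'm producing the matches (the "BruteForce-Hamming" matcher is just what I happen to be testing with, and sceneDescriptors / templateDescriptors are placeholders for the descriptor Mats from my detector):

// scene image is the query set, template image is the train set
cv::Ptr<cv::DescriptorMatcher> matcher = cv::DescriptorMatcher::create("BruteForce-Hamming");
std::vector<std::vector<cv::DMatch> > matches;
matcher->knnMatch(sceneDescriptors, templateDescriptors, matches, 2);
// matches.size() equals the number of query (scene) descriptors;
// each matches[i] holds up to 2 candidates for descriptor i, best first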
To pull the matched coordinates out, I've got something like this:
void getMatchingPoints(
    const vector<vector<cv::DMatch> >& matches,
    const vector<cv::KeyPoint>& keyPtsTemplates,
    const vector<cv::KeyPoint>& keyPtsScene,
    vector<Vec2f>& ptsTemplate,
    vector<Vec2f>& ptsScene
)
{
    ptsTemplate.clear();
    ptsScene.clear();

    // flatten every DMatch into two parallel point lists,
    // regardless of which inner vector it came from
    for (size_t k = 0; k < matches.size(); k++)
    {
        for (size_t i = 0; i < matches[k].size(); i++)
        {
            const cv::DMatch& match = matches[k][i];
            ptsScene.push_back(fromOcv(keyPtsScene[match.queryIdx].pt));
            ptsTemplate.push_back(fromOcv(keyPtsTemplates[match.trainIdx].pt));
        }
    }
}
but I'm not sure how to actually map the approximate location of the object once I have all the points in ptsScene. The points look scattered when I just draw them, so I think I'm missing what the nested vectors represent.
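My rough plan was to keep only the best match per descriptor with a ratio test and then estimate a homography, something like the sketch below (the 0.75 threshold is arbitrary, and templateImage is a placeholder for my actual template Mat), but I'm not sure if that's the right way to read the nested structure:

std::vector<cv::Point2f> goodTemplate, goodScene;
for (size_t k = 0; k < matches.size(); k++)
{
    if (matches[k].size() < 2)
        continue;
    // keep the best candidate only if it clearly beats the runner-up
    if (matches[k][0].distance < 0.75f * matches[k][1].distance)
    {
        const cv::DMatch& m = matches[k][0];
        goodScene.push_back(keyPtsScene[m.queryIdx].pt);
        goodTemplate.push_back(keyPtsTemplates[m.trainIdx].pt);
    }
}

// map the template's corners into the scene
cv::Mat H = cv::findHomography(goodTemplate, goodScene, CV_RANSAC);
std::vector<cv::Point2f> templateCorners(4), sceneCorners(4);
templateCorners[0] = cv::Point2f(0.0f, 0.0f);
templateCorners[1] = cv::Point2f((float)templateImage.cols, 0.0f);
templateCorners[2] = cv::Point2f((float)templateImage.cols, (float)templateImage.rows);
templateCorners[3] = cv::Point2f(0.0f, (float)templateImage.rows);
cv::perspectiveTransform(templateCorners, sceneCorners, H);

Is that roughly the right idea, or does the per-descriptor grouping in matches mean I should be doing something else before this step?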