After getting no answer to this question, I came across some interesting-looking possible solutions: the Robust Matcher from this post, and the Canny Detector from this post.
After setting up a Canny Edge Detector (referencing its documentation) and implementing the Robust Matcher shown in the first post I linked, I acquired some logo/clothing images and had decent success with the two combined:
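
For reference, my Canny setup looks roughly like this (the helper name toCannyEdges and the threshold values are just placeholders; I tuned the thresholds per image):

#include <opencv2/imgproc/imgproc.hpp>

// Grayscale -> blur -> Canny, following the recipe in the OpenCV documentation.
cv::Mat toCannyEdges(const cv::Mat& src, double lowThresh, double highThresh)
{
    cv::Mat gray, blurred, edges;
    cv::cvtColor(src, gray, CV_BGR2GRAY);     // Canny expects a single-channel image
    cv::blur(gray, blurred, cv::Size(3, 3));  // smooth first to suppress noise edges
    cv::Canny(blurred, edges, lowThresh, highThresh, 3);
    return edges;
}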
But in other, very similar cases it was off:

Different logo image with the "exact" same design, and the same clothing image as above.
So that got me wondering: is there a way of matching several specific points on an image that define certain areas of the given image? Instead of reading the image in, matching all the keypoints, discarding "bad" keypoints, etc., is it possible for the system to know where one keypoint is in relation to another, and then discard matches that sit right next to each other in one image but land in completely different places in the other? (As shown by the light blue and royal blue "matches" that are right next to each other in the left image but match to completely separate parts of the right image.)
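
As far as I know this isn't an existing OpenCV call, so here is a minimal sketch of the kind of check I mean (the function name and the nearDist/farDist thresholds are invented for illustration):

#include <cmath>
#include <vector>
#include <opencv2/features2d/features2d.hpp>

// If two keypoints are near-neighbours in the train image, their matches
// should also be near each other in the test image; otherwise flag both
// matches as suspect and drop them.
std::vector<cv::DMatch> filterBySpatialConsistency(
        const std::vector<cv::DMatch>& matches,
        const std::vector<cv::KeyPoint>& trainKeypoints,
        const std::vector<cv::KeyPoint>& testKeypoints,
        float nearDist, float farDist)
{
    std::vector<bool> suspect(matches.size(), false);
    for (size_t i = 0; i < matches.size(); ++i) {
        for (size_t j = i + 1; j < matches.size(); ++j) {
            cv::Point2f a1 = trainKeypoints[matches[i].queryIdx].pt;
            cv::Point2f a2 = trainKeypoints[matches[j].queryIdx].pt;
            cv::Point2f b1 = testKeypoints[matches[i].trainIdx].pt;
            cv::Point2f b2 = testKeypoints[matches[j].trainIdx].pt;
            float dLeft  = std::sqrt((a1.x - a2.x) * (a1.x - a2.x) + (a1.y - a2.y) * (a1.y - a2.y));
            float dRight = std::sqrt((b1.x - b2.x) * (b1.x - b2.x) + (b1.y - b2.y) * (b1.y - b2.y));
            // neighbours on the left that land far apart on the right
            if (dLeft < nearDist && dRight > farDist)
                suspect[i] = suspect[j] = true;
        }
    }
    std::vector<cv::DMatch> kept;
    for (size_t i = 0; i < matches.size(); ++i)
        if (!suspect[i]) kept.push_back(matches[i]);
    return kept;
}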
EDIT

For Micka: the "rectangle" is drawn in the center of the white box (which I added in Paint).
// RANSAC filtering of matches via the fundamental matrix, adapted from the
// Robust Matcher post linked above. distance, confidence and refineF are
// members of the Robust Matcher class from that post, and trainCannyImg /
// testCannyImg are set elsewhere in my code.
cv::Mat ransacTest(const std::vector<cv::DMatch>& matches,
                   const std::vector<cv::KeyPoint>& trainKeypoints,
                   const std::vector<cv::KeyPoint>& testKeypoints,
                   std::vector<cv::DMatch>& outMatches)
{
    // Convert keypoints into Point2f
    std::vector<cv::Point2f> points1, points2;
    cv::Mat fundamental;
    for (std::vector<cv::DMatch>::const_iterator it = matches.begin(); it != matches.end(); ++it) {
        // Get the position of left keypoints
        float x = trainKeypoints[it->queryIdx].pt.x;
        float y = trainKeypoints[it->queryIdx].pt.y;
        points1.push_back(cv::Point2f(x, y));
        // Get the position of right keypoints
        x = testKeypoints[it->trainIdx].pt.x;
        y = testKeypoints[it->trainIdx].pt.y;
        points2.push_back(cv::Point2f(x, y));
    }

    // Compute F matrix using RANSAC:
    // matching points - match status (inlier or outlier) - RANSAC method -
    // distance to epipolar line - confidence probability.
    // Note: assign to the outer variable rather than redeclaring it here,
    // which would shadow the result.
    std::vector<uchar> inliers(points1.size(), 0);
    if (points1.size() > 0 && points2.size() > 0) {
        fundamental = cv::findFundamentalMat(
            cv::Mat(points1), cv::Mat(points2), inliers, CV_FM_RANSAC, distance, confidence);

        // Extract the surviving (inlier) matches
        std::vector<uchar>::const_iterator itIn = inliers.begin();
        std::vector<cv::DMatch>::const_iterator itM = matches.begin();
        for ( ; itIn != inliers.end(); ++itIn, ++itM) {
            if (*itIn) {  // it is a valid match
                outMatches.push_back(*itM);
            }
        }

        if (refineF) {
            // The F matrix will be recomputed with all accepted matches:
            // convert the surviving keypoints back into Point2f for the final F computation
            points1.clear();
            points2.clear();
            for (std::vector<cv::DMatch>::const_iterator it = outMatches.begin(); it != outMatches.end(); ++it) {
                // Get the position of left keypoints
                float x = trainKeypoints[it->queryIdx].pt.x;
                float y = trainKeypoints[it->queryIdx].pt.y;
                points1.push_back(cv::Point2f(x, y));
                // Get the position of right keypoints
                x = testKeypoints[it->trainIdx].pt.x;
                y = testKeypoints[it->trainIdx].pt.y;
                points2.push_back(cv::Point2f(x, y));
            }
            // Compute 8-point F from all accepted matches
            if (points1.size() > 0 && points2.size() > 0) {
                fundamental = cv::findFundamentalMat(cv::Mat(points1), cv::Mat(points2),
                                                     CV_FM_8POINT); // 8-point method
            }
        }
    }

    cv::Mat imgMatchesMat;
    cv::drawMatches(trainCannyImg, trainKeypoints, testCannyImg, testKeypoints,
                    outMatches, imgMatchesMat);

    // Note: if refineF is false, points1/points2 still hold ALL matches at this
    // point, not just the RANSAC inliers; findHomography also needs at least 4 pairs.
    if (points1.size() < 4)
        return fundamental;
    cv::Mat H = cv::findHomography(points1, points2, CV_RANSAC, 3); // little difference when CV_RANSAC
                                                                    // is changed to CV_LMEDS or 0
    //-- Get the corners from image_1 (the object to be "detected")
    std::vector<cv::Point2f> obj_corners(4);
    obj_corners[0] = cv::Point2f(0, 0);
    obj_corners[1] = cv::Point2f((float)trainCannyImg.cols, 0);
    obj_corners[2] = cv::Point2f((float)trainCannyImg.cols, (float)trainCannyImg.rows);
    obj_corners[3] = cv::Point2f(0, (float)trainCannyImg.rows);
    std::vector<cv::Point2f> scene_corners(4);
    cv::perspectiveTransform(obj_corners, scene_corners, H);

    //-- Draw lines between the corners (the mapped object in the scene - image_2);
    //-- the x offset shifts the box into the right half of the side-by-side canvas
    cv::Point2f offset((float)trainCannyImg.cols, 0);
    cv::line(imgMatchesMat, scene_corners[0] + offset, scene_corners[1] + offset, cv::Scalar(0, 255, 0), 4);
    cv::line(imgMatchesMat, scene_corners[1] + offset, scene_corners[2] + offset, cv::Scalar(0, 255, 0), 4);
    cv::line(imgMatchesMat, scene_corners[2] + offset, scene_corners[3] + offset, cv::Scalar(0, 255, 0), 4);
    cv::line(imgMatchesMat, scene_corners[3] + offset, scene_corners[0] + offset, cv::Scalar(0, 255, 0), 4);

    //-- Show detected matches
    cv::imshow("Good Matches & Object detection", imgMatchesMat);
    cv::waitKey(0);
    return fundamental;
}
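
For completeness, this is roughly how I drive the function above; the detector/extractor choice and the Robust Matcher's ratio/symmetry tests live elsewhere in my code, so treat the matcher settings here as placeholders:

std::vector<cv::KeyPoint> trainKeypoints, testKeypoints;
cv::Mat trainDescriptors, testDescriptors;
// ... detect keypoints and compute descriptors on trainCannyImg / testCannyImg ...

cv::BFMatcher matcher(cv::NORM_L2);  // brute-force matcher, L2 for float descriptors
std::vector<cv::DMatch> matches;
matcher.match(trainDescriptors, testDescriptors, matches);  // train side is the query

std::vector<cv::DMatch> inlierMatches;
cv::Mat F = ransacTest(matches, trainKeypoints, testKeypoints, inlierMatches);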
Homography Output
Slightly different input scenario (I'm constantly changing things around, and it would take too long to reproduce the exact conditions of the image above), but with the same outcome:
Object (52, 37) -> Scene (219, 151)
Object (49, 47) -> Scene (241, 139)
Object (51, 50) -> Scene (242, 141)
Object (37, 53) -> Scene (228, 145)
Object (114, 37.2) -> Scene (281, 162)
Object (48.96, 46.08) -> Scene (216, 160.08)
Object (44.64, 54.72) -> Scene (211.68, 168.48)
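
A quick sanity check on these pairs: push each object point through H with perspectiveTransform and compare against the reported scene point; large residuals would mean H doesn't actually explain the correspondences. A sketch using the numbers above (checkReprojection is just an illustrative name, and H is assumed to be the matrix from findHomography in the code):

#include <cstdio>
#include <cmath>
#include <vector>
#include <opencv2/calib3d/calib3d.hpp>

void checkReprojection(const cv::Mat& H)
{
    const int n = 7;
    // Object/Scene pairs copied from the output above
    const cv::Point2f objPts[n] = {
        cv::Point2f(52, 37), cv::Point2f(49, 47), cv::Point2f(51, 50),
        cv::Point2f(37, 53), cv::Point2f(114, 37.2f),
        cv::Point2f(48.96f, 46.08f), cv::Point2f(44.64f, 54.72f) };
    const cv::Point2f scnPts[n] = {
        cv::Point2f(219, 151), cv::Point2f(241, 139), cv::Point2f(242, 141),
        cv::Point2f(228, 145), cv::Point2f(281, 162),
        cv::Point2f(216, 160.08f), cv::Point2f(211.68f, 168.48f) };

    std::vector<cv::Point2f> obj(objPts, objPts + n), proj;
    cv::perspectiveTransform(obj, proj, H);  // map object points into the scene
    for (int i = 0; i < n; ++i) {
        float dx = proj[i].x - scnPts[i].x;
        float dy = proj[i].y - scnPts[i].y;
        printf("pair %d: residual %.1f px\n", i, std::sqrt(dx * dx + dy * dy));
    }
}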
Image in question: