
After getting no answer to this question, I came across some promising-looking possible solutions:

The Robust Matcher from this post, as well as the Canny edge detector from this post.

After setting up a Canny edge detector (referencing its documentation) and implementing the Robust Matcher shown in the first post I linked, I acquired some logo/clothing images and had some decent success with the two combined:

[Image: a logo matched against a picture of an item of clothing with that logo on it]

But in other very similar cases, it was off:

[Image: a different logo image with the "exact" same design, matched against the same clothing image as above]
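For context, the combined pipeline was roughly the following (a minimal sketch of my setup, assuming OpenCV 2.4, grayscale inputs, default ORB settings and placeholder file names; not the exact code):

#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    // Load both images as grayscale (file names are placeholders)
    cv::Mat logoImg  = cv::imread("logo.png",  CV_LOAD_IMAGE_GRAYSCALE);
    cv::Mat clothImg = cv::imread("cloth.png", CV_LOAD_IMAGE_GRAYSCALE);

    // Canny first, to cancel noise and keep just the logo outlines
    cv::Mat logoCanny, clothCanny;
    cv::Canny(logoImg,  logoCanny,  100, 200);
    cv::Canny(clothImg, clothCanny, 100, 200);

    // ORB keypoints and descriptors computed on the edge images
    cv::ORB orb;
    std::vector<cv::KeyPoint> logoKp, clothKp;
    cv::Mat logoDesc, clothDesc;
    orb(logoCanny,  cv::Mat(), logoKp,  logoDesc);
    orb(clothCanny, cv::Mat(), clothKp, clothDesc);

    // Brute-force Hamming matching (ORB descriptors are binary);
    // the Robust Matcher's ratio/symmetry/RANSAC tests then prune these
    cv::BFMatcher matcher(cv::NORM_HAMMING);
    std::vector<cv::DMatch> matches;
    matcher.match(logoDesc, clothDesc, matches);

    cv::Mat out;
    cv::drawMatches(logoCanny, logoKp, clothCanny, clothKp, matches, out);
    cv::imshow("matches", out);
    cv::waitKey(0);
    return 0;
}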

So that got me wondering: is there a way of matching several specific points on an image that define certain areas of that image?

So instead of reading the image in and then matching all the keypoints, discarding "bad" keypoints and so on: is it possible to have the system know where one keypoint is in relation to another, and then discard matches that sit right next to each other in one image but land in completely different places in the other?

(as shown with the light-blue and royal-blue "matches" that are right next to each other in the left image, but match off in completely separate parts of the right image; one way to filter such matches is sketched below)
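For what it's worth, one naive way to express that idea is a pairwise-distance consistency vote (a sketch only; `filterBySpatialConsistency`, `maxRatio` and `minVotes` are my own made-up names and tuning parameters, and the fixed ratio threshold only tolerates scale change between logo and garment up to `maxRatio`):

#include <algorithm>
#include <cmath>
#include <vector>
#include <opencv2/features2d/features2d.hpp>

// Helper: Euclidean distance between two points
static float pointDist(const cv::Point2f& a, const cv::Point2f& b)
{
    float dx = a.x - b.x, dy = a.y - b.y;
    return std::sqrt(dx * dx + dy * dy);
}

// Keep a match only if, for enough other matches, the distance between the two
// keypoints in the logo image roughly agrees with the distance between their
// counterparts in the clothing image ("points that sit together should land
// together").
std::vector<cv::DMatch> filterBySpatialConsistency(
    const std::vector<cv::DMatch>& matches,
    const std::vector<cv::KeyPoint>& trainKp,  // logo keypoints
    const std::vector<cv::KeyPoint>& testKp,   // clothing keypoints
    float maxRatio = 3.0f, int minVotes = 4)
{
    std::vector<cv::DMatch> kept;
    for (size_t i = 0; i < matches.size(); ++i) {
        int votes = 0;
        for (size_t j = 0; j < matches.size(); ++j) {
            if (i == j) continue;
            float dTrain = pointDist(trainKp[matches[i].queryIdx].pt,
                                     trainKp[matches[j].queryIdx].pt);
            float dTest  = pointDist(testKp[matches[i].trainIdx].pt,
                                     testKp[matches[j].trainIdx].pt);
            if (dTrain < 1e-3f || dTest < 1e-3f) continue; // coincident points
            float r = std::max(dTrain, dTest) / std::min(dTrain, dTest);
            if (r < maxRatio) ++votes; // this pair "moved together"
        }
        if (votes >= minVotes)
            kept.push_back(matches[i]);
    }
    return kept;
}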

EDIT

For Micka

[Image: homography output on the clothing image]

"Rectangle" is drawn in the center of the (added on in paint) White Box.

cv::Mat ransacTest(const std::vector<cv::DMatch>& matches,
                   const std::vector<cv::KeyPoint>& trainKeypoints,
                   const std::vector<cv::KeyPoint>& testKeypoints,
                   std::vector<cv::DMatch>& outMatches)
{
    // Note: 'distance', 'confidence' and 'refineF' are RobustMatcher members
    // (distance = 3, confidence = 0.99, refineF = true; see the comments below)

    // Convert the matched keypoints into Point2f
    std::vector<cv::Point2f> points1, points2;
    cv::Mat fundamental;
    for (std::vector<cv::DMatch>::const_iterator it = matches.begin(); it != matches.end(); ++it) {
        // Position of the left (logo/train) keypoint
        points1.push_back(trainKeypoints[it->queryIdx].pt);
        // Position of the right (clothing/test) keypoint
        points2.push_back(testKeypoints[it->trainIdx].pt);
    }

    // Compute the F matrix with RANSAC and collect the inlier mask
    std::vector<uchar> inliers(points1.size(), 0);
    if (!points1.empty() && !points2.empty()) {
        // Assign to the outer 'fundamental' -- the original code re-declared a
        // local cv::Mat here, shadowing the one returned at the end
        fundamental = cv::findFundamentalMat(
            cv::Mat(points1), cv::Mat(points2), // matching points
            inliers,                            // match status (inlier or outlier)
            CV_FM_RANSAC,                       // RANSAC method
            distance,                           // distance to epipolar line
            confidence);                        // confidence probability

        // Extract the surviving (inlier) matches
        std::vector<uchar>::const_iterator itIn = inliers.begin();
        std::vector<cv::DMatch>::const_iterator itM = matches.begin();
        for ( ; itIn != inliers.end(); ++itIn, ++itM) {
            if (*itIn) { // it is a valid match
                outMatches.push_back(*itM);
            }
        }

        if (refineF) {
            // Recompute F from all accepted matches with the 8-point method
            points1.clear();
            points2.clear();
            for (std::vector<cv::DMatch>::const_iterator it = outMatches.begin(); it != outMatches.end(); ++it) {
                points1.push_back(trainKeypoints[it->queryIdx].pt);
                points2.push_back(testKeypoints[it->trainIdx].pt);
            }
            if (!points1.empty() && !points2.empty()) {
                fundamental = cv::findFundamentalMat(
                    cv::Mat(points1), cv::Mat(points2), CV_FM_8POINT);
            }
        }
    }

    // Draw the surviving matches; note that when refineF is true, points1 and
    // points2 now hold only the inlier coordinates, otherwise they still
    // contain every original match (outliers included)
    Mat imgMatchesMat;
    drawMatches(trainCannyImg, trainKeypoints, testCannyImg, testKeypoints,
                outMatches, imgMatchesMat);

    Mat H = findHomography(points1, points2, CV_RANSAC, 3); // little difference when CV_RANSAC is changed to CV_LMEDS or 0

    // Get the corners of image_1 (the logo to be "detected")
    std::vector<Point2f> obj_corners(4);
    obj_corners[0] = cvPoint(0, 0);
    obj_corners[1] = cvPoint(trainCannyImg.cols, 0);
    obj_corners[2] = cvPoint(trainCannyImg.cols, trainCannyImg.rows);
    obj_corners[3] = cvPoint(0, trainCannyImg.rows);
    std::vector<Point2f> scene_corners(4);
    perspectiveTransform(obj_corners, scene_corners, H);

    // Draw lines between the projected corners (the mapped logo in image_2);
    // the x-offset shifts the outline past the logo half of the side-by-side image
    line(imgMatchesMat, scene_corners[0] + Point2f(trainCannyImg.cols, 0), scene_corners[1] + Point2f(trainCannyImg.cols, 0), Scalar(0, 255, 0), 4);
    line(imgMatchesMat, scene_corners[1] + Point2f(trainCannyImg.cols, 0), scene_corners[2] + Point2f(trainCannyImg.cols, 0), Scalar(0, 255, 0), 4);
    line(imgMatchesMat, scene_corners[2] + Point2f(trainCannyImg.cols, 0), scene_corners[3] + Point2f(trainCannyImg.cols, 0), Scalar(0, 255, 0), 4);
    line(imgMatchesMat, scene_corners[3] + Point2f(trainCannyImg.cols, 0), scene_corners[0] + Point2f(trainCannyImg.cols, 0), Scalar(0, 255, 0), 4);

    // Show detected matches
    imshow("Good Matches & Object detection", imgMatchesMat);
    waitKey(0);

    return fundamental;
}
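As Micka points out in the comments below, a malformed homography can collapse the logo rectangle to a single point or a line, which appears to be what happens here. One cheap guard before drawing the green outline (a sketch; `homographyLooksSane` and its `minAreaRatio` threshold are my own invention) is to project the logo corners and reject H when the resulting quadrilateral is non-convex or nearly degenerate:

#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Reject homographies that collapse the logo outline to (almost) a point or a
// line: project the logo corners and check the area and convexity of the
// resulting quadrilateral.
bool homographyLooksSane(const cv::Mat& H, const cv::Size& logoSize,
                         double minAreaRatio = 0.01)
{
    if (H.empty()) return false;

    std::vector<cv::Point2f> corners(4), projected;
    corners[0] = cv::Point2f(0.f, 0.f);
    corners[1] = cv::Point2f((float)logoSize.width, 0.f);
    corners[2] = cv::Point2f((float)logoSize.width, (float)logoSize.height);
    corners[3] = cv::Point2f(0.f, (float)logoSize.height);
    cv::perspectiveTransform(corners, projected, H);

    double area     = cv::contourArea(projected); // near 0 for a point or a line
    double logoArea = (double)logoSize.width * logoSize.height;
    return cv::isContourConvex(projected) && area > minAreaRatio * logoArea;
}

Calling this right after `findHomography` (e.g. `homographyLooksSane(H, trainCannyImg.size())`) and skipping the outline when it fails would at least keep the collapsed rectangles off the screen; it does not fix the underlying shortage of good matches.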

Homography Output

Slightly different input scenario (I'm constantly changing things around, and it would take too long to work out the exact conditions to repeat the image above perfectly), but the same outcome; a sketch for checking these points follows the list:

Object (52, 37)
Scene  (219, 151)
Object (49, 47)
Scene  (241,139)
Object (51, 50)
Scene  (242, 141)
Object (37, 53)
Scene  (228, 145)
Object (114, 37.2)
Scene  (281, 162)
Object (48.96, 46.08)
Scene  (216, 160.08)
Object (44.64, 54.72)
Scene  (211.68, 168.48)
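(Micka's later remark that these points yield only 4 inliers can be reproduced by feeding the seven printed correspondences straight into `findHomography` and counting the inlier mask; a quick self-contained sketch, using the same RANSAC call as in `ransacTest`:)

#include <iostream>
#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/calib3d/calib3d.hpp>

int main()
{
    // The seven Object/Scene correspondences printed above
    const float obj[7][2] = { {52, 37}, {49, 47}, {51, 50}, {37, 53},
                              {114, 37.2f}, {48.96f, 46.08f}, {44.64f, 54.72f} };
    const float scn[7][2] = { {219, 151}, {241, 139}, {242, 141}, {228, 145},
                              {281, 162}, {216, 160.08f}, {211.68f, 168.48f} };
    std::vector<cv::Point2f> object, scene;
    for (int i = 0; i < 7; ++i) {
        object.push_back(cv::Point2f(obj[i][0], obj[i][1]));
        scene.push_back(cv::Point2f(scn[i][0], scn[i][1]));
    }

    // Same call as in the code above: RANSAC, 3-pixel reprojection threshold
    std::vector<uchar> mask;
    cv::Mat H = cv::findHomography(object, scene, CV_RANSAC, 3, mask);

    int inliers = 0;
    for (size_t i = 0; i < mask.size(); ++i)
        if (mask[i]) ++inliers;
    std::cout << inliers << " of " << mask.size()
              << " correspondences are inliers" << std::endl;
    std::cout << "H = " << std::endl << H << std::endl;
    return 0;
}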

Image in question:

[Image: the Canny'ed clothing image used for the output above]

  • So you don't want to post your code, and you don't want to say which parameters you used to compute your homography. The matches you show in your images, are they from the robust (RANSAC) matcher only, or are they the inliers from findHomography with RANSAC? You didn't say how you combined the two posts' answers. The robust matcher doesn't work with a homography at all, but with a fundamental matrix. – Micka Feb 04 '15 at 15:29
  • ;D - The Homography output is the basic code from the Documentation, given the `DMatch` containing the keypoint locations for all the inliers. I'll share the section of the code. – MLMLTL Feb 04 '15 at 15:39
  • imho your code looks fine (except that you return a fundamental mat and compute the refined fundamental mat probably without using it later). I have no idea why that homography is accepted, because the warped points should all have a distance to their matches of more than 3 pixels (which should be the limit). Can you try the other way around: match the image to the logo, to see how the homography looks there? Printing the homography might be another hint. – Micka Feb 04 '15 at 17:01
  • Matching from image to logo is a useless endeavor, as you would then be matching an entire item of clothing to a logo, i.e. trying to find a t-shirt in a logo. Obviously I tried it out. No dice. (Homography was all over the place; only on images where the size of the logo and the size of the same logo on the item of clothing were about the same did it kind of work.) – MLMLTL Feb 05 '15 at 10:20
  • Obviously you don't want to match image-to-logo in your final version; it's just for testing the homographies of your ill-cased images. Did you print the homographies in both versions? – Micka Feb 05 '15 at 10:48
  • Can you add the original Canny logo image and the original Canny cloth image, so that I can try to repeat your results? Maybe I can find something. – Micka Feb 05 '15 at 10:54
  • As in the original unedited image, or the image after `Canny`? Also adding the homography output now. – MLMLTL Feb 05 '15 at 11:02
  • The ones you use for keypoint detection and description, so that I can reproduce the results; so probably the Canny images. – Micka Feb 05 '15 at 11:04
  • As I state in the latest edit, conditions and variables have been changed around, so I'll give you the latest `Canny`'ed image used. imgur: http://i.imgur.com/DwaSf5M.jpg – MLMLTL Feb 05 '15 at 11:11
  • Can you save and upload it in a lossless format (e.g. `.png`), and the logo to detect, too? – Micka Feb 05 '15 at 11:32
  • logo: http://i.imgur.com/Nh8faOw.png clothing: http://i.imgur.com/epCYYda.png – MLMLTL Feb 05 '15 at 11:35
  • what values did you choose for `confidence` and `distance` in your `ransacTest` function? – Micka Feb 05 '15 at 13:17
  • I tried altering the values that came as standard with the OpenCV Cookbook, but ended up leaving them as is - `RobustMatcher() : ratio(0.65f), refineF(true), confidence(0.99), distance(3)` – MLMLTL Feb 05 '15 at 13:23
  • In the new image you posted (slightly different conditions) there are only 4 inliers. RANSAC can't tell which one is the best if all models (or two models) have the same number of inliers... – Micka Feb 05 '15 at 13:42
  • I don't get what you mean. Where do you get 4 from? – MLMLTL Feb 05 '15 at 13:44
  • Looking at the picture and running `findHomography` on the points you posted. But the previous images should give more inliers, so that might not be the only problem. – Micka Feb 05 '15 at 13:46
  • You're saying I'm too thorough in my discarding of outliers? – MLMLTL Feb 05 '15 at 13:52
  • I mean that there are so few inliers for the real "good" homography that a malformed homography (transforming your logo rectangle to a single point or to a line) created from 4 outliers has the same number of inliers. – Micka Feb 05 '15 at 14:03
  • One more thing: at first sight it looks like those bad homographies occur when some of the scene points or some of the object points of the matches chosen for creating the model are collinear. So OpenCV does not seem to check those points for collinearity, which should always be done; this might be a bug in the OpenCV code. – Micka Feb 05 '15 at 14:11
  • _"I mean that there are so few inlier for the real "good" homography"_ - So I am being too thorough in my discarding of outliers? – MLMLTL Feb 05 '15 at 14:28
  • Needn't be the case... probably there aren't more "good matches" even before you use the "robust matcher", so it might be more a problem of your chosen feature detection/description technique (e.g. descriptors of the Canny image data might not be very descriptive at all; not sure how ORB descriptors work). – Micka Feb 05 '15 at 14:39
  • Did you try ratio testing, as mentioned in http://answers.opencv.org/question/15/how-to-get-good-matches-from-the-orb-feature-detection-algorithm/ ? – Micka Feb 05 '15 at 14:55
  • The reason for `Canny` is to cancel out unnecessary noise, leaving just the outlines of what's important (mostly); in this case, the logo. `ORB` is similar to `Harris` in that it has corner detection built in (I believe; _"Rosin's corner intensity"_), so for outlines of logos this is perfect. (Read that from the top of page 3 in this paper: http://bit.ly/1xqMenf - best link I could find.) – MLMLTL Feb 05 '15 at 15:24
  • Comments are not for extended discussion; this conversation has been [moved to chat](http://chat.stackoverflow.com/rooms/70315/discussion-on-question-by-mlmltl-matching-specific-elements-of-an-image-known-s). – Taryn Feb 05 '15 at 15:39
  • Keypoint detection is fine for a Canny image, but probably not the `description` part. The descriptor used by ORB is `BRIEF`, which might not be good for edge images; not sure. Maybe search for a good `edge descriptor` or something. – Micka Feb 05 '15 at 15:42
  • Seems like a clutching-at-straws suggestion... as for ratio testing, that's already included in the `Robust Matcher` (a minimal sketch of it follows these comments). – MLMLTL Feb 05 '15 at 15:54
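For reference, the ratio test discussed above (already part of the Robust Matcher; the 0.65 ratio comes from the `RobustMatcher` defaults quoted earlier) looks roughly like this with `knnMatch`. This is a sketch, not the Robust Matcher's exact code; `ratioTest` is a hypothetical helper taking the descriptor matrices from the pipeline sketch near the top:

#include <vector>
#include <opencv2/features2d/features2d.hpp>

// Lowe-style ratio test: keep a match only when the best descriptor distance
// is clearly smaller than the second best.
std::vector<cv::DMatch> ratioTest(const cv::Mat& logoDesc,
                                  const cv::Mat& clothDesc,
                                  float ratio = 0.65f)
{
    cv::BFMatcher matcher(cv::NORM_HAMMING);        // Hamming for binary ORB descriptors
    std::vector<std::vector<cv::DMatch> > knn;
    matcher.knnMatch(logoDesc, clothDesc, knn, 2);  // two nearest neighbours per query

    std::vector<cv::DMatch> good;
    for (size_t i = 0; i < knn.size(); ++i) {
        if (knn[i].size() == 2 &&
            knn[i][0].distance < ratio * knn[i][1].distance)
            good.push_back(knn[i][0]);
    }
    return good;
}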

0 Answers