
I'm doing some feature detection/pattern matching based on the OpenCV example code shown at https://docs.opencv.org/3.4/d5/dde/tutorial_feature_description.html

In my example, input2 is an image with a size of 256x256 pixels and input1 is a subimage of input2 (e.g. with an offset of 5x5 pixels and a size of 200x80 pixels).

Everything works fine: OpenCV detects more keypoints and descriptors in input2 than in input1, and after matching the two descriptor sets, "matches" contains exactly as many elements as there are descriptors in descriptors1.

So far everything is logical and consistent: the number of matches is exactly the number of keypoints/descriptors expected in the subimage part.
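For context, here is a stripped-down sketch of my setup along the lines of the linked tutorial (SURF from the xfeatures2d module plus a brute-force matcher; the file name and threshold are placeholders, not my exact code):

```cpp
#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/xfeatures2d.hpp>

using namespace cv;

int main()
{
    // input2: the full 256x256 image (file name is just a placeholder)
    Mat input2 = imread("input2.png", IMREAD_GRAYSCALE);

    // input1: a subimage of input2 (offset 5,5 / size 200x80 as in the example above)
    Mat input1 = input2(Rect(5, 5, 200, 80)).clone();

    // Detect keypoints and compute descriptors (SURF as in the linked tutorial)
    Ptr<xfeatures2d::SURF> detector = xfeatures2d::SURF::create(400.0);
    std::vector<KeyPoint> keypoints1, keypoints2;
    Mat descriptors1, descriptors2;
    detector->detectAndCompute(input1, noArray(), keypoints1, descriptors1);
    detector->detectAndCompute(input2, noArray(), keypoints2, descriptors2);

    // Match descriptors1 (query) against descriptors2 (train);
    // matches.size() then equals descriptors1.rows
    BFMatcher matcher(NORM_L2);
    std::vector<DMatch> matches;
    matcher.match(descriptors1, descriptors2, matches);

    return 0;
}
```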

My problem: the elements in "matches" all have a way too big distance value! They are all bigger than 5 (my subimage offset) and most of them are bigger than 256 (the total image size)!

What could be the reason for this?

Thanks!

Update: here is the image I'm working with:

[screenshot: the full image (input2) with a blue dashed rectangle marking input1 and the detected keypoints drawn as circles]

The whole image is my input2 (don't worry that it isn't 256x256 pixels; it is taken from a screenshot that shows more things). The blue, dashed rectangle in the middle shows my input1, and the circles within this rectangle mark the already detected keypoints1.

The behavior of the code appears to differ between versions 2.x and 3.x.

  • Can you show the image and draw a rectangle at the subimage position? – Micka Dec 13 '21 at 06:57
  • @Micka no problem, question updated! – Elmi Dec 13 '21 at 07:05
  • That's not an offset of 5,5 but something with a small x and a bigger y offset – Micka Dec 13 '21 at 07:14
  • How do you compute the distance value? Is it really the point distance or is it the descriptor distance (matching distance)? – Micka Dec 13 '21 at 07:28
  • @Micka correct, the offset of 5,5 was only an example; the same problem appears with the offset shown in the image. I don't calculate the distances myself, I simply look into matches[].distance and see these way too big values. – Elmi Dec 13 '21 at 07:38
  • matches.distance isn't the point position distance: https://stackoverflow.com/questions/16996800/what-does-the-distance-attribute-in-dmatches-mean However, if you are using subimages, I would assume the matching distance should be very small (if not even 0). If you are using images similar to your sample images, SIFT isn't suited at all, I think. For image descriptors you typically need textured regions. – Micka Dec 13 '21 at 07:40
  • if you want to test the point position distance, use `cv::norm(point1 - point2)` where point1 is the point from input1 and point2 is the matched point from input2. – Micka Dec 13 '21 at 07:44
  • @Micka OK, so this seems to be a misunderstanding on my part regarding this distance value. But what brought me to this assumption: when I do a findHomography() after matching, according to the example from https://docs.opencv.org/3.4/d7/dff/tutorial_feature_homography.html, the resulting matrix contains way too big translation values and even completely senseless rotations. – Elmi Dec 13 '21 at 08:08
  • can you show an actual (real) result of the drawMatches function in your setting? – Micka Dec 13 '21 at 08:24
  • @Micka I compared the results between OpenCV 2.4 and OpenCV 3.4: I only get these big distance values and the wrong homography results with 3.4; in 2.4 the distance values are not bigger than 2 with exactly the same images. So I assume something is really broken in 3.4. Sorry to say, but under these circumstances I prefer to stay with 2.4.13.6. – Elmi Dec 13 '21 at 13:21
  • can you post the source code you used with OpenCV 2 and OpenCV 3? – Micka Dec 13 '21 at 13:28
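Following up on Micka's suggestion in the comments, here is a small sketch of how the geometric offset per match and the homography could be checked. It assumes keypoints1, keypoints2 and matches as produced by the sketch in the question; DMatch::distance is the descriptor distance, not a distance in pixels.

```cpp
#include <iostream>
#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/calib3d.hpp>

using namespace cv;

// keypoints1/keypoints2/matches as produced by the matching sketch above
void checkMatches(const std::vector<KeyPoint> &keypoints1,
                  const std::vector<KeyPoint> &keypoints2,
                  const std::vector<DMatch> &matches)
{
    std::vector<Point2f> pts1, pts2;
    for (const DMatch &m : matches)
    {
        Point2f p1 = keypoints1[m.queryIdx].pt;  // keypoint in input1 (sub-image)
        Point2f p2 = keypoints2[m.trainIdx].pt;  // matched keypoint in input2 (full image)

        // m.distance is the descriptor distance; the pixel offset between the
        // matched points has to be computed separately:
        std::cout << "descriptor distance: " << m.distance
                  << ", pixel offset: " << norm(p1 - p2) << std::endl;

        pts1.push_back(p1);
        pts2.push_back(p2);
    }

    // Homography as in the linked tutorial; for a plain sub-image with a pure
    // translation it should come out close to an identity matrix with the
    // sub-image offset in the last column.
    Mat H = findHomography(pts1, pts2, RANSAC);
    std::cout << "H = " << std::endl << H << std::endl;
}
```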
