
I'm trying to develop an algorithm that returns a similarity score for two given black-and-white images: an original and a sketch of it drawn by a human:

[image: an original image and a human-drawn sketch of it]

All original images have the same style, but there is no fixed, limited set of them; their content can be totally different.

I've tried a few approaches, but none of them has been successful yet:

OpenCV template matching

OpenCV's matchTemplate was not able to give me a usable similarity score. It only tells me the count of matched pixels, and this count is usually quite low because the proportions of a human sketch are never exact.

OpenCV feature matching

I failed with this method because I couldn't find a good algorithm for extracting significant features from a human sketch. The algorithms from OpenCV's tutorials are good at extracting corners and blobs as features, but sketches consist of many strokes, and each stroke produces lots of insignificant, junk features that lead to fuzzy results.
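For example, a minimal version of such an attempt with ORB (other detectors from the tutorials behave similarly on stroke-heavy images) looks like this:

    import cv2

    # Illustrative file names
    img1 = cv2.imread('original.png', cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread('sketch.png', cv2.IMREAD_GRAYSCALE)

    # Detect keypoints and compute binary descriptors for both images
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Brute-force Hamming matching; on a sketch most keypoints sit on strokes
    # and look alike, so many of these matches are ambiguous
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)
    print(len(matches), 'matches')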

Neural Network Classification

I also took a look at neural networks. They are good at image classification, but they need a training set for each class, and that is impossible here because the set of possible images is unlimited.

Which methods and algorithms would you use for this kind of task?

  • have a look at chamfer matching (a rough sketch of the idea follows these comments). You'll need to find ways to make it scale- and rotation-invariant. Other shape matching algorithms might be good, too. And have a look at active contours! – Micka Feb 17 '17 at 10:27
  • How about cross-correlation? You can get the best-matching pixel position, then you can align the images and calculate the similarity score – smttsp Feb 17 '17 at 13:57
  • How about cosine similarity, given the fact that they are binary images? – Jeru Luke Feb 17 '17 at 14:10
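To make the chamfer-matching suggestion concrete, here is a rough sketch of the idea (the variable names are mine; it assumes two equal-size binary images, bin_orig and bin_sketch, with white strokes on a black background):

    import cv2

    # Distance transform of the original: each pixel gets its distance to the
    # nearest stroke pixel (strokes must be zero, hence the inversion)
    dist = cv2.distanceTransform(cv2.bitwise_not(bin_orig), cv2.DIST_L2, 3)

    # Chamfer score: mean distance from each sketch stroke pixel to the
    # nearest original stroke pixel (lower means more similar)
    chamfer_score = dist[bin_sketch > 0].mean()
    print(chamfer_score)

Scale and rotation invariance, as the comment notes, would still have to be handled separately.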

1 Answer


METHOD 1

Cosine similarity gives a similarity score ranging between 0 and 1.
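For two flattened pixel vectors A and B it is computed as

    similarity = (A · B) / (‖A‖ ‖B‖)

and it cannot go below 0 here because pixel intensities are non-negative.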

I first converted the images to grayscale and binarized them. I cropped the original image to half its size to exclude the text, as shown below:

[images: the binarized original (cropped) and the binarized sketch]
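In outline, the preprocessing could look like this (the file names, threshold value, crop and resize below are illustrative, not the exact values used):

    import cv2

    # Illustrative file names, threshold and crop; adjust to the actual images
    orig = cv2.imread('original.png', cv2.IMREAD_GRAYSCALE)
    sketch = cv2.imread('sketch.png', cv2.IMREAD_GRAYSCALE)

    _, th1 = cv2.threshold(orig, 127, 255, cv2.THRESH_BINARY)
    _, th2 = cv2.threshold(sketch, 127, 255, cv2.THRESH_BINARY)

    # Keep only the upper half of the original, dropping the text below the drawing
    th1 = th1[:th1.shape[0] // 2, :]

    # Resize the sketch so both arrays end up with the same shape
    th2 = cv2.resize(th2, (th1.shape[1], th1.shape[0]))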

I then converted the image arrays to 1D arrays using flatten(). I used the following to compute cosine similarity:

    from scipy import spatial

    # im1 = th1.flatten(), im2 = th2.flatten(); scipy's cosine() returns the
    # cosine *distance*, so the similarity is 1 minus that value
    result = 1 - spatial.distance.cosine(im2, im1)
    print(result)

The result I obtained was 0.999999988431, which means the two images are very similar according to this score.

EDIT

METHOD 2

I had time to check out another solution and figured out that OpenCV's cv2.matchTemplate() function can perform the same job.

If you check out THIS DOCUMENTATION PAGE you will come across the different parameters that can be used.

I used the cv2.TM_SQDIFF_NORMED parameter (which gives the normalized square difference between the two images).

    import cv2

    # th1, th2: the binarized images (equal size, so res is a single 1x1 value)
    res = cv2.matchTemplate(th1, th2, cv2.TM_SQDIFF_NORMED)
    print(1 - res[0][0])

For the given images I obtained a similarity score of: 0.89689457
