
There are some surprisingly good image comparison tools which find similar images even if they are not exactly the same (e.g. changes in size, wallpaper, brightness/contrast). I have some example applications here:

I only tried the first one, but all of them are developed for Windows and are not open source. Unique Filer was released in 2000 and its homepage seems to have disappeared. It was surprisingly fast (even on computers from that year) because it used an index: comparing some 10,000 images against the index took only a few seconds (and updating the index was a scalable process).

Since this algorithm has existed in a very effective form for at least 15 years, I assume it is well documented and possibly already implemented as an open-source library. Does anyone know more about which algorithm or image detection theory was used to implement these applications? Maybe there is even an open-source implementation available?

I already checked the question Algorithm for finding similar images, but all of its answers solve the problem by comparing one image to another. For 1000+ images this results in 1000^2 comparison operations, which is just not what I'm looking for.

Daniel Alder
  • Obviously, there are still people who upvote my question. Thank you all. But all Stack Overflow does is mark my question as a duplicate of a question which doesn't answer it (index-based algorithm!). I was hoping to get better help from Stack Overflow :-( – Daniel Alder Aug 12 '14 at 15:36
  • The newest answer on that "dup" is 3 years old; please stop being trigger-happy with closing valid questions. – OneOfOne Aug 13 '14 at 03:55
  • @OneOfOne *That's* exactly why you can award bounties on questions. If you want new answers to an already asked question just add a bounty with the correct reason and comment on the answers to point out whether they are outdated or to ask the answerers if, in the meantime, some better solution is available. – Bakuriu Aug 13 '14 at 11:24

2 Answers


The problem you are describing is more generally called Nearest Neighbor Search. Since you are asking for high efficiency on large datasets, Approximate Nearest Neighbor Search is what you are after.

An efficient technique for this is Locality-Sensitive Hashing (LSH), for which these slides give a great overview. Its basic idea is to use hash functions that project all data into a low-dimensional space, designed so that the hashes of similar items collide with high probability while the hashes of dissimilar items collide with low probability. These probabilities are parameters of the algorithm, through which the trade-off between accuracy and efficiency can be tuned.

LSHKIT is an open-source implementation of LSH.
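
Not LSHKIT's actual API, but a minimal random-projection LSH sketch in Python to make the idea concrete; the number of projections `k` and the bucket width `w` are illustrative values, not parameters taken from this answer:

```python
import numpy as np

# Minimal random-projection LSH sketch (illustrative only, not LSHKIT's API).
# Vectors that are close in Euclidean distance land in the same bucket with
# high probability; only bucket members need an exact comparison afterwards.
rng = np.random.default_rng(0)
k, dim, w = 8, 100, 4.0            # projections, vector length, bucket width (assumed)
planes = rng.normal(size=(k, dim))
offsets = rng.uniform(0.0, w, size=k)

def lsh_key(vec):
    """Quantized random projections form the bucket key."""
    return tuple(np.floor((planes @ vec + offsets) / w).astype(int))

buckets = {}

def insert(img_id, vec):
    buckets.setdefault(lsh_key(vec), []).append(img_id)

def candidates(vec):
    """Images sharing the query's bucket; compare only these exactly."""
    return buckets.get(lsh_key(vec), [])
```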

Florian von Stosch
  • Unfortunately, the magic and hard part is finding a real-world usable hash function for images. – Lothar Mar 10 '19 at 20:00

Meanwhile, I analyzed the algorithm of UniqueFiler:

size reduction

First, it reduces all images to 10x10 pixel grayscale images (likely without using interpolation)
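
A sketch of that step with Pillow (which library UniqueFiler actually used is unknown; `Image.NEAREST` mimics the "likely without interpolation" guess):

```python
from PIL import Image

def reduce_to_10x10(path):
    """Shrink an image to a 10x10 grayscale grid, 100 values in 0..255."""
    img = Image.open(path).convert("L")        # "L" = 8-bit grayscale
    img = img.resize((10, 10), Image.NEAREST)  # no interpolation
    return list(img.getdata())                 # 100 values, row by row
```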

rotation

Probably based on the brightness of the 4 quadrants, some rotation is done (this step is dangerous because it sometimes 'overlooks' similarities if images are too symmetric)
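
A guess at how such a normalization could look; the rule "rotate until the brightest quadrant is top-left" is my assumption, not a confirmed detail of UniqueFiler:

```python
import numpy as np

def normalize_rotation(pixels):
    """Rotate the 10x10 grid so the brightest quadrant ends up top-left.
    ASSUMPTION: the exact rule is undocumented; this only illustrates
    one way a quadrant-brightness heuristic could work."""
    grid = np.array(pixels, dtype=float).reshape(10, 10)
    sums = [grid[:5, :5].sum(), grid[:5, 5:].sum(),
            grid[5:, 5:].sum(), grid[5:, :5].sum()]  # TL, TR, BR, BL
    brightest = int(np.argmax(sums))
    # k counter-clockwise 90-degree turns move that quadrant to top-left
    return np.rot90(grid, k=brightest)
```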

range reduction

The image brightness range is fully extended (brightest -> white, darkest -> black) and then reduced to 2 bit (4 values) per pixel
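
In code, that step could look like this (a sketch; the exact quantization boundaries are my assumption):

```python
import numpy as np

def stretch_and_quantize(grid):
    """Stretch brightness to the full range (darkest -> black,
    brightest -> white), then quantize to 2 bits (4 values, 0..3)."""
    g = np.asarray(grid, dtype=float)
    lo, hi = g.min(), g.max()
    if hi == lo:                       # flat image: avoid division by zero
        return np.zeros(g.shape, dtype=np.uint8)
    stretched = (g - lo) / (hi - lo)   # now in 0.0 .. 1.0
    return np.minimum((stretched * 4).astype(np.uint8), 3)
```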

database

The values get stored as arrays of 100 bytes per image (plus file metadata)
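
Stored one value per byte, that fingerprint is exactly 100 bytes. A hypothetical fixed-size record might be packed like this; the field layout is pure guesswork, not the actual UniqueFiler format:

```python
import struct

# ASSUMED layout: 100 fingerprint bytes + fixed-width path field
RECORD = struct.Struct("100s256s")

def pack_record(fingerprint, path):
    """Hypothetical fixed-size record; short fields are null-padded."""
    assert len(fingerprint) == 100
    return RECORD.pack(bytes(fingerprint), path.encode("utf-8"))
```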

comparison

... is done one-by-one (two nested loops over the whole database plus a third for the 100 bytes). Today, we would probably index the sorted sums of all 4 quadrants for a fast pre-selection of similar candidates.
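
A sketch of that pre-selection idea (the tolerance `tol` is an assumed parameter, and a real index would sort the keys instead of scanning them linearly):

```python
import numpy as np

def quadrant_key(grid):
    """Sorted sums of the 4 quadrants; sorting makes the key
    invariant under the rotation step above."""
    g = np.asarray(grid)
    sums = [int(g[:5, :5].sum()), int(g[:5, 5:].sum()),
            int(g[5:, :5].sum()), int(g[5:, 5:].sum())]
    return tuple(sorted(sums))

def preselect(query_key, index, tol=8):
    """index: list of (quadrant_key, image_id) pairs. Returns only images
    whose quadrant sums all lie within `tol` of the query's; the expensive
    100-byte match then runs on this short candidate list only."""
    return [img_id for key, img_id in index
            if all(abs(a - b) <= tol for a, b in zip(query_key, key))]
```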

matcher

The comparison is done byte-by-byte: the difference between each pair of bytes is weighted, but by less than the square. The sum of these 100 results is the final distance between two images.
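
A hedged reading of that metric; the exponent 1.5 is just one example of a weight that grows faster than the plain difference but "less than the square":

```python
def fingerprint_distance(a, b, exponent=1.5):
    """Sum of weighted per-byte differences between two 100-byte
    fingerprints. The exponent is an assumption; the original only
    says 'weighted but less than the square'."""
    assert len(a) == len(b) == 100
    return sum(abs(x - y) ** exponent for x, y in zip(a, b))
```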

I have more detailed information at home. If I find the time, I will add it to this answer. I found all this after discovering that the database format is actually a gzipped file without a header, containing fixed-size records per image.

Daniel Alder