
In LSH, you hash slices (bands) of the documents into buckets. The idea is that documents that fall into the same bucket are potentially similar, i.e., candidate nearest neighbors.
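As a minimal sketch of that banding step (the function and input shape here are hypothetical, assuming each document already has a minhash signature as a list of integers):

```python
from collections import defaultdict

def band_buckets(signatures, bands, rows_per_band, n_buckets):
    """Hash each band of every document's signature into a bucket.

    signatures: dict of doc_id -> list of minhash values (assumed input shape).
    Returns one bucket table per band; documents sharing a bucket in any
    band become candidate pairs.
    """
    tables = [defaultdict(list) for _ in range(bands)]
    for doc_id, sig in signatures.items():
        for b in range(bands):
            # slice out this band's rows and hash them into one of n_buckets
            band = tuple(sig[b * rows_per_band:(b + 1) * rows_per_band])
            tables[b][hash(band) % n_buckets].append(doc_id)
    return tables
```

Documents whose signatures agree on a whole band land in the same bucket for that band, which is exactly the "potentially similar" collision described above.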

For 40,000 documents, what is a good (rough) value for the number of buckets?

I currently have number_of_buckets = 40,000/4, but I feel it can be reduced further.

Any ideas, please?


Related: How to hash vectors into buckets in Locality Sensitive Hashing (using jaccard distance)?

gsamaras
    It is a bit unclear what you actually hashed in your earlier question: "documents as columns and words as rows [..] every stripe has its columns hashed, so that a column falls in a bucket. If two columns fall in the same bucket, for >= 1 stripes, then they are potentially similar". Anyway, a common starting point is to use `sqrt(n)` buckets for `n` documents. You can try doubling and halving that and run some analysis to see what kind of document distributions you get. – NikoNyrh May 16 '16 at 14:25

2 Answers


A common starting point is to use sqrt(n) buckets for n documents. You can try doubling and halving that and run some analysis to see what kind of document distributions you get. Naturally any other exponent can be tried as well, and even K * log(n) if you expect the number of distinct clusters to grow "slowly".
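A quick sketch of this heuristic for the 40,000 documents from the question (the function name is just for illustration):

```python
import math

def candidate_bucket_counts(n):
    """Return the sqrt(n) starting point plus its half and double,
    as candidate bucket counts to compare empirically."""
    base = math.isqrt(n)
    return [base // 2, base, 2 * base]

# For n = 40,000 this gives [100, 200, 400] buckets to try.
print(candidate_bucket_counts(40_000))
```

You would then build the buckets with each candidate count and inspect how evenly the documents spread out before settling on one.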

I don't think this is an exact science yet; it belongs to the same family of problems as choosing the optimal k for k-means clustering.

NikoNyrh

I think it should be at least n. If it is less than that, say n/2, then due to collisions every document will, on average, have at least one candidate (possibly similar) document in each band. So just computing the candidate similarities will already cost at least O(n).

On the other hand, you will have to pass over the buckets at least K times (once per band), which is O(K*B), where B is the number of buckets. I believe the latter is faster, because it is just iterating over your data structure (a dictionary of some kind) and counting the number of documents that hashed to each bucket.
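That O(K*B) counting pass can be sketched as follows (the input shape is assumed: one dictionary per band, mapping bucket ids to lists of document ids, as produced by whatever banding step you use):

```python
def bucket_sizes(band_tables):
    """Count documents per bucket across all K bands.

    band_tables: list of K dicts, each mapping bucket_id -> list of doc ids.
    One pass over every bucket in every band: O(K * B) work in total.
    """
    sizes = {}
    for band_index, table in enumerate(band_tables):
        for bucket_id, docs in table.items():
            sizes[(band_index, bucket_id)] = len(docs)
    return sizes
```

Buckets with large counts are where most candidate-pair comparisons (and hence most of the O(n) similarity work) would come from.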

ledermauss