6

I often keep track of duplicates with something like this:

processed = set() 
for big_string in strings_generator:
    if big_string not in processed:
        processed.add(big_string)
        process(big_string)

I am dealing with massive amounts of data so don't want to maintain the processed set in memory. I have a version that uses sqlite to store the data on disk, but then this process runs much slower.

To cut down on memory use, what do you think of using hashes like this:

processed = set()
for big_string in strings_generator:
    key = hash(big_string)
    if key not in processed:
        processed.add(key)
        process(big_string)

The drawback is I could lose data through occasional hash collisions. 1 collision in 1 billion hashes would not be a problem for my use.

I tried the md5 hash but found generating the hashes became a bottleneck.

What would you suggest instead?

hoju
  • Well, you know your options. Either you keep the object alive so it can tell the `set` if the current item is indeed itself or just produces the same hash, or you truncate these objects to hashes and get unrecoverable hash collisions. There's no way to avoid this without knowing way too much about the objects' internals. –  Dec 12 '10 at 22:19
  • If you are using md5, collision is negligible (smaller than the chance of a memory fault due to cosmic rays). – Paulo Scardine Dec 12 '10 at 22:27
  • +1 to Paulo: With any modern hash, you'll run out of memory for your set before you're likely to encounter a chance hash collision. – Thomas K Dec 12 '10 at 23:28
  • I found calculating the md5 hash adds too much overhead - what alternative would you recommend? – hoju Dec 04 '11 at 11:58

10 Answers

4

I'm going to assume you are hashing web pages. You have to hash at most 55 billion web pages (and that measure almost certainly overlooks some overlap).

You are willing to accept a less than one in a billion chance of collision, which means that if we look at a hash function whose number of collisions is close to what we would get if the hash were truly random[^1], we want a hash range of size (55*10^9)*10^9. That is log2((55*10^9)*10^9) ≈ 66 bits.

[^1]: Since the hash can be considered to be chosen at random for this purpose, p(collision) = (occupied range)/(total range).
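
If you want to sanity-check that sizing yourself, a short computation (numbers taken from above) gives the same figure:

from math import log2

pages = 55e9        # items to hash
tolerance = 1e-9    # accepted chance of a collision per new item
print(log2(pages / tolerance))   # ~65.6, so round up to 66 bits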

Since there is a speed issue but no real cryptographic concern, we can use a >66-bit non-cryptographic hash with the nice collision distribution property outlined above.

It looks like we are looking for the 128-bit version of the Murmur3 hash. People have been reporting speed increases upwards of 12x comparing Murmur3_128 to MD5 on a 64-bit machine. You can use this library to do your speed tests. See also this related answer, which:

  • shows speed test results in the range of python's str_hash, whose speed you have already deemed acceptable elsewhere – though python's hash is a 32-bit hash, leaving you only 2^32/(10^9) (that is, only about 4) values stored with a less than one in a billion chance of collision.
  • spawned a library of python bindings that you should be able to use directly.
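
As a concrete sketch (assuming the bindings expose an interface like the mmh3 package; adapt to whatever you actually install), the loop from your question barely changes:

import mmh3  # assumed MurmurHash3 bindings: pip install mmh3

processed = set()
for big_string in strings_generator:
    key = mmh3.hash128(big_string)   # 128-bit non-cryptographic hash
    if key not in processed:
        processed.add(key)
        process(big_string)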

Finally, I hope to have outlined the reasoning that could allow you to compare with other functions of varied size should you feel the need for it (e.g. if you up your collision tolerance, or if the size of your indexed set is smaller than the whole Internet, etc.).

Francois G
2

You have to decide which is more important: space or time.

If time, then you need to create unique representations of your large_item which take as little space as possible (probably some str value), are easy (i.e. quick) to calculate, and will not have collisions, and store them in a set.

If space, find the quickest disk-backed solution you can and store the smallest possible unique value that will identify a large_item.

So either way, you want small unique identifiers -- depending on the nature of large_item this may be a big win, or not possible.

Update

they are strings of html content

Perhaps a hybrid solution then: Keep a set in memory of the normal Python hash, while also keeping the actual html content on disk, keyed by that hash; when you check to see if the current large_item is in the set and get a positive, double-check with the disk-backed solution to see if it's a real hit or not, then skip or process as appropriate. Something like this:

import dbf
on_disk = dbf.Table('/tmp/processed_items', 'hash N(17,0); value M')
index = on_disk.create_index(lambda rec: rec.hash)

fast_check = set()
def slow_check(hashed, item):
    # look up all on-disk records with this hash and compare the actual content
    matches = index.search((hashed,))
    for record in matches:
        if item == record.value:
            return True
    return False

for large_item in many_items:
    hashed = hash(large_item) # only calculate once
    if hashed not in fast_check or not slow_check(hashed, large_item):
        on_disk.append((hashed, large_item))
        fast_check.add(hashed)
        process(large_item)    

FYI: dbf is a module I wrote which you can find on PyPI

Ethan Furman
1

If many_items already resides in memory, you are not creating another copy of the large_item. You are just storing a reference to it in the processed set.

If many_items is a file or some other generator, you'll have to look at other alternatives.

E.g. if many_items is a file, perhaps you can store a pointer to the item in the file instead of the actual item.
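
For example, here is a rough sketch of that idea, assuming many_items is a file with one item per line ('items.txt' and the layout are made up), combined with the hashing from the question so that a hash hit can be double-checked against the file itself:

seen = {}  # hash of line -> byte offsets of lines seen with that hash
with open('items.txt', 'rb') as f:
    while True:
        offset = f.tell()
        line = f.readline()
        if not line:
            break
        key = hash(line)
        candidates = seen.setdefault(key, [])
        here = f.tell()
        is_duplicate = False
        for previous_offset in candidates:
            # re-read the earlier line to rule out a hash collision
            f.seek(previous_offset)
            if f.readline() == line:
                is_duplicate = True
                break
        f.seek(here)
        if not is_duplicate:
            candidates.append(offset)
            process(line)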

John La Rooy
  • @user470379: Unless `many_items` is a generator, it keeps the items alive as well... –  Dec 12 '10 at 22:26
1

As you have already seen, there are a few options, but unfortunately none of them can fully address the situation, partly because of:

  1. the memory constraint of storing every object in memory;
  2. the lack of a perfect hash function, so for a huge data set there is some chance of collision;
  3. better hash functions (md5) being slower;
  4. the use of a database like sqlite, which actually makes things slower.

Reading the following excerpt from your question:

I have a version that uses sqlite to store the data on disk, but then this process runs much slower.

I feel that if you work on this, it might help you marginally. Here is how it should go:

  1. Use tmpfs to create a ramdisk. tmpfs has several advantages over other ramdisk implementations because it supports swapping less-used pages out to swap space.
  2. Store the sqlite database on the ramdisk.
  3. Change the size of the ramdisk and profile your code to check your performance.

I suppose you already have working code to save your data in sqlite. You only need to set up a tmpfs and use a path on it to store your database.

Caveat: This is a Linux-only solution.
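
For illustration, here is a minimal sketch assuming a tmpfs is already mounted at /mnt/ramdisk (a made-up mount point, e.g. via `mount -t tmpfs -o size=2g tmpfs /mnt/ramdisk`) and a trivial one-column schema; the only point is that the database file lives on the ramdisk:

import sqlite3

conn = sqlite3.connect('/mnt/ramdisk/processed.db')  # database file on the ramdisk
conn.execute('CREATE TABLE IF NOT EXISTS processed (key TEXT PRIMARY KEY)')

def already_processed(key):
    # True if the key was seen before, otherwise record it
    if conn.execute('SELECT 1 FROM processed WHERE key = ?', (key,)).fetchone():
        return True
    with conn:
        conn.execute('INSERT INTO processed (key) VALUES (?)', (key,))
    return False

for big_string in strings_generator:
    if not already_processed(str(hash(big_string))):
        process(big_string)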

Abhijit
1

A Bloom filter? http://en.wikipedia.org/wiki/Bloom_filter
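
To make the idea concrete, here is a minimal, untuned sketch (the sizes, the blake2b-based double hashing, and the utf-8 encoding are my own choices); the trade-off versus exact hashing is a small, tunable false-positive rate, i.e. you would occasionally skip an item that was not actually a duplicate:

import hashlib

class BloomFilter:
    def __init__(self, size_bits, num_hashes=7):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8 + 1)

    def _positions(self, item):
        # derive num_hashes bit positions from one digest (double hashing)
        digest = hashlib.blake2b(item.encode('utf-8')).digest()
        h1 = int.from_bytes(digest[:8], 'big')
        h2 = int.from_bytes(digest[8:16], 'big')
        for i in range(self.num_hashes):
            yield (h1 + i * h2) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

seen = BloomFilter(size_bits=8 * 10**9)   # ~1 GB of bits; tune to your data
for big_string in strings_generator:
    if big_string not in seen:
        seen.add(big_string)
        process(big_string)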

Samus_
0

Well, you can always decorate large_item with a processed flag, or something similar.
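
A sketch of that, with the caveat (also raised in the comment below) that it only helps when duplicates are several references to the same object, and that built-in str instances cannot take new attributes, so large_item would have to be an instance of your own class:

for large_item in many_items:
    if getattr(large_item, 'processed', False):
        continue                  # this exact object was already handled
    large_item.processed = True
    process(large_item)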

Kugel
  • That works if many_items can throw up several references to a common object. But the usual situation is several objects with the same value. – Thomas K Dec 12 '10 at 23:19
0

You can give the str type's __hash__ function a try.

In [1]: hash('http://stackoverflow.com')
Out[1]: -5768830964305142685 

It's definitely not a cryptographic hash function, but with a little luck you won't have too many collisions. It works as described here: http://effbot.org/zone/python-hash.htm.

mutantacule
0

I suggest you profile standard Python hash functions and choose the fastest: they are all sufficiently "safe" against collisions for your application.

Here are some benchmarks for hash, md5 and sha1:

In [37]: very_long_string = 'x' * 1000000
In [39]: %timeit hash(very_long_string)
10000000 loops, best of 3: 86 ns per loop

In [40]: from hashlib import md5, sha1

In [42]: %timeit md5(very_long_string).hexdigest()
100 loops, best of 3: 2.01 ms per loop

In [43]: %timeit sha1(very_long_string).hexdigest()
100 loops, best of 3: 2.54 ms per loop

md5 and sha1 are comparable in speed. hash is 20k times faster for this string and it does not seem to depend much on the size of the string itself.

lbolla
  • hash() would be efficient enough, but how often then would collisions be expected? – hoju Dec 27 '11 at 14:46
  • `hash` is a 32- or 64-bit integer, depending on your architecture. Taking into account that `hash` is used to implement `dict` and `set`, it must be good enough for you too ;-) – lbolla Dec 28 '11 at 10:12
0

How does your sqlite version work? If you insert all your strings into a database table and then run the query "select distinct big_string from table_name", the database should optimize it for you.
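
For instance, with the standard sqlite3 module (the table and column names are made up), that approach looks roughly like:

import sqlite3

conn = sqlite3.connect('strings.db')
conn.execute('CREATE TABLE IF NOT EXISTS strings (big_string TEXT)')
with conn:  # commit once after the bulk insert
    conn.executemany('INSERT INTO strings (big_string) VALUES (?)',
                     ((s,) for s in strings_generator))
for (big_string,) in conn.execute('SELECT DISTINCT big_string FROM strings'):
    process(big_string)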

Another option for you would be to use Hadoop.

Another option could be to split the strings into partitions such that each partition is small enough to fit in memory. Then you only need to check for duplicates within each partition. The formula you use to decide the partition will choose the same partition for each duplicate. The easiest way is to just look at the first few characters of the string, e.g.:

from collections import defaultdict

d = defaultdict(int)
for big_string in strings_generator:
    d[big_string[:4]] += 1
print(d)

Now you can decide on your partitions, go through the generator again, and write each big_string to a file that has the start of the big_string in the filename. Then you can just use your original method on each file and loop through all the files.
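
Here is a rough sketch of that two-pass approach (the bucket directory, the partition count, and using hash(big_string) % N as the partition formula instead of a literal prefix are my own choices; HTML pages tend to share the same first characters, and pickling sidesteps items that contain newlines):

import os
import pickle

NUM_PARTITIONS = 64   # tune so each bucket's distinct items fit in memory
os.makedirs('buckets', exist_ok=True)
buckets = [open('buckets/%02d.pkl' % i, 'ab') for i in range(NUM_PARTITIONS)]

# pass 1: any deterministic function of the string sends duplicates
# to the same bucket
for big_string in strings_generator:
    pickle.dump(big_string, buckets[hash(big_string) % NUM_PARTITIONS])
for f in buckets:
    f.close()

# pass 2: dedup each bucket with the original in-memory set approach
for i in range(NUM_PARTITIONS):
    processed = set()
    with open('buckets/%02d.pkl' % i, 'rb') as f:
        while True:
            try:
                big_string = pickle.load(f)
            except EOFError:
                break
            if big_string not in processed:
                processed.add(big_string)
                process(big_string)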

Rusty Rob
0

This can be achieved much more easily by performing simpler checks first, then investigating those cases with more elaborate checks. The example below contains extracts of your code, but it performs the checks on much smaller sets of data. It does this by first matching on a simple case that is cheap to check. If you find that (filesize, checksum) pairs are not discriminating enough, you can easily swap them for a cheaper, or more rigorous, check.

# Need to define the following functions
def GetFileSize(filename):
    pass
def GenerateChecksum(filename):
    pass
def LoadBigString(filename):
    pass

# Returns a list of duplicates pairs.
def GetDuplicates(filename_list):
    duplicates = list()
    # Stores arrays of filename, mapping a quick hash to a list of filenames.
    filename_lists_by_quick_checks = dict()
    for filename in filename_list:
        quickcheck = GetQuickCheck(filename)
        if quickcheck not in filename_lists_by_quick_checks:
            filename_lists_by_quick_checks[quickcheck] = list()
        filename_lists_by_quick_checks[quickcheck].append(filename)
    for quickcheck, filenames in filename_lists_by_quick_checks.items():
        big_strings = GetBigStrings(filenames)
        duplicates.extend(GetBigstringDuplicates(big_strings))
    return duplicates

def GetBigstringDuplicates(strings_generator):
    # Processes each unique big_string and returns any duplicates found.
    processed = set()
    duplicates = list()
    for big_string in strings_generator:
        if big_string not in processed:
            processed.add(big_string)
            process(big_string)
        else:
            duplicates.append(big_string)
    return duplicates

# Returns a tuple containing (filesize, checksum).
def GetQuickCheck(filename):
    return (GetFileSize(filename), GenerateChecksum(filename))

# Returns a list of big_strings from a list of filenames.
def GetBigStrings(file_list):
    big_strings = list()
    for filename in file_list:
        big_strings.append(LoadBigString(filename))
    return big_strings
Keldon Alleyne