
The question: What solutions or tips would you have for dealing with a very large (multi-terabyte) database indexed on strong hashes with high redundancy?

Some kind of inverted storage?

Is there something that could be done with Postgres?

I am ready to roll my own storage if needed.

(Hint: Must be open source, no Java, must run on Linux, must be disk-based, C/C++/Python preferred)

The details:

I need to create a very large database where each record has:

  • some arbitrary metadata (some text fields), including a primary key
  • one hash (a 128-bit, strong, MD5-like hash)

The volume of records is what I would qualify as quite large: several tens to hundreds of billions. There is significant redundancy of hashes across rows (over 40% of the records share their hash with at least one other record, and some hashes exist in 100K records).

The primary usage is to lookup by hash, then retrieve the metadata. The secondary usage is to lookup by primary key, then retrieve the metadata.

This is an analytics-type database, so the overall load is medium: mostly reads, few writes, and those mostly batched.

The current approach is to use Postgres, with an index on the primary key and an index on the hash column. The table is loaded in batches with the index on the hash column dropped.

All indexes are btrees. The index on the hash column grows huge, as big as or bigger than the table itself. On a 120 GB table it takes about a day to recreate the index. Query performance is quite good, though.
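
For illustration, the current setup is roughly the following (the table and column names here are placeholders; a bytea column is one way to store the 128-bit hash as 16 raw bytes):

    -- Baseline schema: one row per record, btree indexes on both lookup paths
    CREATE TABLE records (
        id       bigint PRIMARY KEY,   -- primary key lookup (secondary usage)
        hash     bytea  NOT NULL,      -- 128-bit hash stored as 16 raw bytes
        metadata text
    );

    -- Batch load with the hash index absent, then rebuild it afterwards
    -- COPY records FROM '/path/to/batch.csv' WITH (FORMAT csv);
    CREATE INDEX records_hash_idx ON records USING btree (hash);

    -- Primary usage: lookup by hash, then retrieve the metadata
    SELECT metadata FROM records WHERE hash = '\x0123456789abcdef0123456789abcdef';
    -- Secondary usage: lookup by primary key
    SELECT metadata FROM records WHERE id = 42;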

The problem is that the projected size for the target database will be over 4TB, based on tests with a smaller 400GB data set representing about 10% of the total target. Once loaded in Postgres, over 50% of the storage is unfortunately used by the SQL index on the hash column.

This is way too big. And I feel that the redundancy in hashes is an opportunity for storing less.

Note also that while this describes the problem, there are a few of these tables that need to be created.

Philippe Ombredanne
  • A 128bit hash isn't really crypto grade these days. Have you tried NOT using indexes, but partitioning based on, say, the first 8 bits of the hash? – Tyler Eaves Mar 15 '11 at 14:41
  • @Tyler 128 bits MD5 or truncated SHA1 is decent crypto to me. At least it has a good use of the key range. I tried not using indexes and the lookup performance is terrible. Can you elaborate on the key partitioning? – Philippe Ombredanne Mar 15 '11 at 14:43
  • So use indexes and take the disk space hit. Optimize for speed or space, pick one. – Tyler Eaves Mar 15 '11 at 14:45
  • @Tyler: thx, yet imho there is room for both speed and space optimizations, and frankly at that scale, they start to be tightly linked to each other: less space means more speed – Philippe Ombredanne Mar 15 '11 at 14:53
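
As an aside on the partitioning idea raised in the comments above: Postgres had no declarative partitioning at the time, but in Postgres 11 and later, hash partitioning gives a similar effect of splitting one giant btree into many smaller per-partition indexes. A sketch with placeholder names (note that Postgres partitions on its own hash of the column value, not literally on the first 8 bits):

    CREATE TABLE records (
        id       bigint,
        hash     bytea NOT NULL,
        metadata text
    ) PARTITION BY HASH (hash);

    -- 256 partitions, roughly comparable to splitting on 8 bits of key space
    CREATE TABLE records_p0 PARTITION OF records
        FOR VALUES WITH (MODULUS 256, REMAINDER 0);
    -- ... repeat for REMAINDER 1 through 255 ...

    -- An index created on the parent cascades to every partition (Postgres 11+)
    CREATE INDEX ON records (hash);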

1 Answer


You could create one table with only an id and the hash, and put your other data in a second table with its own index, the metadata, and a hashId referencing the first table. Doing so, you avoid writing the same hash up to 100K times in the table.
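
In Postgres terms, that split might look roughly like this (the answer gives no DDL, so table and column names are placeholders):

    -- Each distinct hash is stored and indexed exactly once
    CREATE TABLE hashes (
        hash_id bigserial PRIMARY KEY,
        hash    bytea NOT NULL UNIQUE       -- 16 bytes per 128-bit hash
    );

    -- One row per record, pointing at its hash through a small surrogate key
    CREATE TABLE records (
        id       bigint PRIMARY KEY,
        hash_id  bigint NOT NULL REFERENCES hashes (hash_id),
        metadata text
    );
    CREATE INDEX records_hash_id_idx ON records (hash_id);

    -- Lookup by hash, then retrieve the metadata
    SELECT r.id, r.metadata
    FROM hashes h
    JOIN records r ON r.hash_id = h.hash_id
    WHERE h.hash = '\x0123456789abcdef0123456789abcdef';

The 16-byte hash and its unique btree then exist once per distinct hash instead of once per record, and the per-record index is over an 8-byte hash_id rather than the hash itself.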

Tokk
  • Interesting, simple. It makes sense indeed! – Philippe Ombredanne Mar 15 '11 at 14:50
  • Now are there some better indexes than btrees for the hash index? – Philippe Ombredanne Mar 15 '11 at 15:05
  • I played with this approach, plus building an inverted storage in postgres (i.e. a key/value table where the value is an array of pointers extended on update, really a posting list)... these provide an interesting size reduction, but creation/update times are really slowing down: now I am leaning towards a real, specialized inverted storage container such as zettair, sphinx or xapian instead. – Philippe Ombredanne Mar 30 '11 at 16:20
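
For reference, a rough sketch of the posting-list table described in that last comment (names are placeholders; ON CONFLICT requires Postgres 9.5+, which postdates the question, where an UPDATE-then-INSERT would have been used instead):

    -- One row per distinct hash; the value is the posting list of record ids
    CREATE TABLE hash_postings (
        hash       bytea PRIMARY KEY,      -- 128-bit hash, 16 bytes
        record_ids bigint[] NOT NULL       -- posting list of matching record ids
    );

    -- Append a record id to a hash's posting list, inserting the row if absent
    INSERT INTO hash_postings (hash, record_ids)
    VALUES ('\x0123456789abcdef0123456789abcdef', ARRAY[42::bigint])
    ON CONFLICT (hash)
    DO UPDATE SET record_ids = hash_postings.record_ids || EXCLUDED.record_ids;

Each append rewrites the whole array row under Postgres MVCC, which is consistent with the slow creation/update times reported in the comment.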