
Since the below got a bit long, here's the tl;dr version: is there an existing key/value best practice for fast key and value lookup, something like a hash-based set with persistent indices?

I'm interested in the world of key-value databases and have so far failed to figure out how one would efficiently implement the following use-case:

Assume we want to serialize some data and reference it somewhere else by a persistent, unique integer index. Thus, e.g.: Key = unsigned int, Value = MyData.

The database should have fast key lookup and ensure that MyData is unique.

Now, when I insert a new value into the database, I could assign it a new index key, e.g. the current size of the database, or, to prevent clashes after removing items, an externally kept counter.

But how would I ensure that I do not insert the same MyData value into my database? So far, it looks to me as if this is not efficiently possible with key-value databases - is this correct? I.e., I do not want to iterate over the whole database just to ensure that the MyData value is not already in there...

What is the best practice to implement this, then?

For background: I work on KDevelop where we use the above for our code analysis cache. We actually have a custom implementation of the above use case [1]. Search for Bucket and ItemRepository if you are interested in the internals, and see [2] for an exemplary usage of the ItemRepository.

But you will probably agree that this code is quite hard to understand and thus hard to maintain. I want to compare its performance to alternative solutions which might result in simpler code - but only if that does not incur a severe performance penalty. Considering the hype around the performance of key-value stores such as OpenLDAP MDB, Kyoto Cabinet, and LevelDB, this is where I wanted to start.

What we have in KDevelop - as far as I have figured out - is basically a sort of hybrid on-disk/in-memory hash map which gets saved to disk periodically (which, of course, can result in major data corruption in the case of crashes, etc.). Items are stored in a location based on their hash value, which also allows relatively fast value lookups, as long as the hash function is fast. The added twist is that you also get a sort of persistent database index which can be used to look up the items quite efficiently.
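To make the semantics I am after concrete, here is a minimal in-memory sketch (illustrative only, not the actual ItemRepository code; MyData is a plain string here for simplicity):

    #include <cstdint>
    #include <string>
    #include <unordered_map>
    #include <vector>

    using MyData = std::string;  // stand-in for the real value type

    // Minimal sketch of the desired semantics: a hash-based set whose
    // elements additionally get stable integer indices.
    class ItemRepositorySketch {
    public:
        // Returns the persistent index of `data`, inserting it only if an
        // equal value is not already stored. O(1) on average.
        uint32_t indexFor(const MyData& data) {
            auto it = m_indices.find(data);
            if (it != m_indices.end())
                return it->second;                   // value already known
            const uint32_t index = static_cast<uint32_t>(m_items.size());
            m_items.push_back(data);
            m_indices.emplace(data, index);
            return index;
        }

        // Fast lookup by persistent index.
        const MyData& dataFor(uint32_t index) const { return m_items.at(index); }

    private:
        std::vector<MyData> m_items;                     // index -> value
        std::unordered_map<MyData, uint32_t> m_indices;  // value -> index
    };

    int main() {
        ItemRepositorySketch repo;
        // Inserting the same value twice yields the same index.
        return repo.indexFor("foo") == repo.indexFor("foo") ? 0 : 1;
    }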

So - long story short - how would one do that with a key/value database such as LevelDB, Kyoto Cabinet, OpenLDAP MDB - you name it?

milianw

3 Answers


Unless I'm missing something here - typically your hash algorithm is consistent and will provide the same key for the same data. Thus you should only need to look up the key to see if it already exists, or handle the (likely duplicate key) error the DB gives back to you.

AFAIK, key/value DBs can and will enforce a unique value constraint for you, i.e. you will get an error if you try to save a value that already exists.
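For concreteness, here is a minimal sketch of that check-before-insert pattern with LevelDB (std::hash is a stand-in for a stable hash, and collision handling is deliberately left out; see the discussion in the comments):

    #include <functional>
    #include <string>
    #include <leveldb/db.h>

    // Sketch: key directly on a hash of the value, so identical data always
    // maps to the same key. std::hash is a stand-in for a stable hash, and
    // a real implementation would also have to resolve hash collisions.
    bool insertIfAbsent(leveldb::DB* db, const std::string& value) {
        const std::string key = std::to_string(std::hash<std::string>{}(value));
        std::string existing;
        leveldb::Status s = db->Get(leveldb::ReadOptions(), key, &existing);
        if (s.ok() && existing == value)
            return false;                  // identical value already stored
        // NotFound (or a collision, which this sketch does not resolve):
        db->Put(leveldb::WriteOptions(), key, value);
        return true;
    }

    int main() {
        leveldb::DB* db = nullptr;
        leveldb::Options options;
        options.create_if_missing = true;
        if (!leveldb::DB::Open(options, "/tmp/uniqdb", &db).ok())
            return 1;
        insertIfAbsent(db, "some serialized MyData");
        delete db;
        return 0;
    }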

justinmvieira
  • Yes, hash algorithms are consistent, but they are not unique for arbitrary values, which breaks your reasoning, no? To elaborate: what would you use as a key for an arbitrary value you want to insert? It must be unique for a given value and relatively fast to look up (i.e. no O(N) algorithm that iterates over all values to find a fitting key). – milianw Dec 27 '12 at 14:14
  • You could use a non-arbitrary key then, like a sequential number whose largest value you keep track of, or a string descriptor like "FirstKey", "SecondKey", "ImportantAppKey", or similar... right? If you use a sequential number, you could even save the "top" to the DB - if you have 100 "buckets", DBNAME.top = 100. Or similar. – justinmvieira Dec 27 '12 at 20:42
  • Here is an example with OrientDB http://code.google.com/p/orient/wiki/Indexes - perhaps your NoSQL DB could generate the indices automatically and use those instead of, or in addition to, arbitrary keys? – justinmvieira Dec 27 '12 at 20:48
  • @milianw, no, it doesn't break the reasoning: if another MyData was already added, it will be added with the same key, which means that you only need to check the value assigned to the existing key (if the key already exists) and compare that old value with your new value; if they agree, then it already exists – lurscher Dec 27 '12 at 23:35
  • vdoogs: I fail to come up with a fitting non-arbitrary key. The sequential number fails when you insert the same value twice, since at that point I cannot just use the next number but have to find the index I assigned to that value before. – milianw Dec 28 '12 at 16:53
  • The OrientDB example looks very interesting, though; this is exactly what I was looking for. – milianw Dec 28 '12 at 16:53
  • @lurscher: The question is how to get a fitting key for an arbitrary MyData. If I use the hash, it's non-unique and thus cannot be used as a unique index to reference the data from other places. – milianw Dec 28 '12 at 16:55
  • If the example is exactly what you are looking for, can you accept my answer? I am more than willing to try and help further through these comments. – justinmvieira Dec 28 '12 at 18:57
  • Also I think you may have misunderstood me - I realize I didn't make it perfectly clear in my post - I was suggesting you use a numeric index in _addition_ to the arbitrary MyData and arbitrary key. That way you have a non-arbitrary and unique "Index", as well as the arbitrary data and hash. I'm pretty sure that is what the OrientDB example is showing. – justinmvieira Dec 28 '12 at 18:58

Sounds like you want to do what OpenLDAP does with its Equality index. Perhaps this is the same as the OrientDB example; I didn't read it.

The main table is indexed by a monotonically increasing integer key (called the entryID), and stores the data value. The equality index is indexed by a hash of the value, and stores a list of entryIDs that match the hash. Since the hash might have collisions, just the existence of an entry in the equality index doesn't prove uniqueness or duplication. You still need to check the actual values.
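A minimal in-memory sketch of that two-table layout (names and the use of std::hash are illustrative; in a real deployment both tables would live in the database itself):

    #include <cstdint>
    #include <functional>
    #include <map>
    #include <string>
    #include <unordered_map>
    #include <vector>

    using MyData = std::string;  // stand-in for the real value type

    // Sketch of the equality-index layout: a main table keyed by a
    // monotonically increasing entryID, plus an equality index mapping a
    // hash of the value to all entryIDs sharing that hash.
    struct EqualityIndexedStore {
        std::map<uint64_t, MyData> mainTable;                     // entryID -> value
        std::unordered_map<size_t, std::vector<uint64_t>> eqIdx;  // hash -> entryIDs
        uint64_t nextId = 1;

        // Returns the entryID of `data`, inserting only if no equal value
        // exists yet. A hash hit alone is not enough: each candidate's
        // actual value has to be compared to rule out mere collisions.
        uint64_t indexFor(const MyData& data) {
            const size_t h = std::hash<MyData>{}(data);
            for (uint64_t id : eqIdx[h])
                if (mainTable[id] == data)
                    return id;        // true duplicate, not just a collision
            const uint64_t id = nextId++;
            mainTable[id] = data;
            eqIdx[h].push_back(id);
            return id;
        }
    };

    int main() {
        EqualityIndexedStore store;
        // The same value twice yields the same entryID.
        return store.indexFor("foo") == store.indexFor("foo") ? 0 : 1;
    }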

A faster/simpler approach, if you're using MDB, BDB, or some other database that supports duplicate keys, is to just keep one table, using the hash as the key. In both MDB and BDB there is a GET_BOTH request which matches both the key and the data to perform a fetch. If it succeeds, then you know for certain that the value already exists. Otherwise, you can just save your data value under its hash and not worry about whether or not there are hash collisions.

One caveat here: in MDB, when using duplicate keys, the size of the values is limited to less than half of a disk page.
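A rough sketch of the duplicate-key approach against the LMDB C API (error handling omitted; std::hash is a stand-in for a stable hash):

    #include <functional>
    #include <string>
    #include <lmdb.h>

    // Sketch: a single MDB_DUPSORT table keyed by a hash of the value.
    // MDB_GET_BOTH matches key *and* data, so a hit means this exact value
    // already exists; colliding values simply become duplicates under the
    // same key. Error handling is omitted for brevity.
    bool insertIfAbsent(MDB_env* env, MDB_dbi dbi, const std::string& value) {
        size_t h = std::hash<std::string>{}(value);  // stand-in hash
        MDB_val key{sizeof(h), &h};
        MDB_val data{value.size(), const_cast<char*>(value.data())};

        MDB_txn* txn;
        mdb_txn_begin(env, nullptr, 0, &txn);
        MDB_cursor* cur;
        mdb_cursor_open(txn, dbi, &cur);

        // Probe with copies, since the cursor may overwrite the MDB_vals.
        MDB_val probeKey = key, probeData = data;
        bool exists = mdb_cursor_get(cur, &probeKey, &probeData, MDB_GET_BOTH) == 0;
        if (!exists)
            mdb_put(txn, dbi, &key, &data, 0);       // store under the hash key

        mdb_cursor_close(cur);
        mdb_txn_commit(txn);
        return !exists;
    }

    int main() {
        MDB_env* env;
        mdb_env_create(&env);
        mdb_env_open(env, "./uniqdb", 0, 0664);      // directory must exist

        MDB_txn* txn;
        mdb_txn_begin(env, nullptr, 0, &txn);
        MDB_dbi dbi;
        mdb_dbi_open(txn, nullptr, MDB_DUPSORT, &dbi);  // main DB, sorted dups
        mdb_txn_commit(txn);

        insertIfAbsent(env, dbi, "some serialized MyData");
        mdb_env_close(env);
    }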

hyc

How big are your value strings?

I would just store them in a key and let the database do all the work.

Typical LevelDB style, which applies to most KV stores, would be to use a pair of keys, prefixed to indicate their type, e.g.:

Key = 'i' + ID 
Value = valueString

Key = 'v' + valueString
Value = ID

In a system that needs to allow for multiple identical valueStrings, you would move the ID into the tail of the second key:

Key = 'v' + valueString + ID
Value = empty
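For concreteness, a minimal LevelDB sketch of this dual-key scheme (the in-memory ID counter is a simplification; a real version would persist it, e.g. under its own key):

    #include <cstdint>
    #include <string>
    #include <leveldb/db.h>
    #include <leveldb/write_batch.h>

    // Sketch of the dual-key scheme: 'i' + ID -> value for index lookups,
    // 'v' + value -> ID for uniqueness checks. The in-memory counter is a
    // simplification; a real version would persist it in the DB as well.
    class DualKeyStore {
    public:
        explicit DualKeyStore(leveldb::DB* db) : m_db(db) {}

        // Returns the persistent ID of `value`, inserting it only if absent.
        std::string idFor(const std::string& value) {
            std::string id;
            leveldb::Status s = m_db->Get(leveldb::ReadOptions(), "v" + value, &id);
            if (s.ok())
                return id;                           // value already stored
            id = std::to_string(m_nextId++);
            leveldb::WriteBatch batch;               // write both keys atomically
            batch.Put("v" + value, id);
            batch.Put("i" + id, value);
            m_db->Write(leveldb::WriteOptions(), &batch);
            return id;
        }

        // Fast lookup of a value by its persistent ID.
        std::string valueFor(const std::string& id) {
            std::string value;
            m_db->Get(leveldb::ReadOptions(), "i" + id, &value);
            return value;
        }

    private:
        leveldb::DB* m_db;
        uint64_t m_nextId = 0;
    };

    int main() {
        leveldb::DB* db = nullptr;
        leveldb::Options options;
        options.create_if_missing = true;
        if (!leveldb::DB::Open(options, "/tmp/dualdb", &db).ok())
            return 1;
        DualKeyStore store(db);
        std::string id = store.idFor("some valueString");
        delete db;
        return 0;
    }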
Andy Dent