On the official site of LevelDB (http://code.google.com/p/leveldb/), there is a performance report, which I have pasted below.
Below is from the official LevelDB benchmark
Here is a performance report (with explanations) from the run of the included db_bench program. The results are somewhat noisy, but should be enough to get a ballpark performance estimate.
Setup
We use a database with a million entries. Each entry has a 16 byte key, and a 100 byte value. Values used by the benchmark compress to about half their original size.
LevelDB: version 1.1
CPU: 4 x Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz
CPUCache: 4096 KB
Keys: 16 bytes each
Values: 100 bytes each (50 bytes after compression)
Entries: 1000000
Raw Size: 110.6 MB (estimated)
File Size: 62.9 MB (estimated)
Write performance
The "fill" benchmarks create a brand new database, in either sequential or random order.
The "fillsync" benchmark flushes data from the operating system to the disk after every operation; the other write operations leave the data sitting in the operating system buffer cache for a while. The "overwrite" benchmark does random writes that update existing keys in the database.
fillseq : 1.765 micros/op; 62.7 MB/s
fillsync : 268.409 micros/op; 0.4 MB/s (10000 ops)
fillrandom : 2.460 micros/op; 45.0 MB/s
overwrite : 2.380 micros/op; 46.5 MB/s
Each "op" above corresponds to a write of a single key/value pair. I.e., a random write benchmark goes at approximately 400,000 writes per second.
Below is from my LevelDB benchmark
I benchmarked LevelDB myself, but got write speeds roughly 100 times lower than the report.
Here is my experiment settings:
- CPU: Intel Core2 Duo T6670 2.20GHz
- 3.0GB memory
- 32-bit Windows 7
- without compression
- options.write_buffer_size = 100MB
- options.block_cache = 640MB
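For reference, the settings above can be expressed in code roughly as follows (a minimal sketch using the LevelDB C++ options API; `MakeOptions` is a hypothetical helper, not part of LevelDB):

```cpp
#include "leveldb/cache.h"
#include "leveldb/options.h"

// Sketch: the options listed above, set via the LevelDB C++ API.
// (MakeOptions is a hypothetical helper name for illustration only.)
leveldb::Options MakeOptions() {
    leveldb::Options options;
    options.create_if_missing = true;
    options.compression = leveldb::kNoCompression;                   // without compression
    options.write_buffer_size = 100 * 1024 * 1024;                   // 100 MB
    options.block_cache = leveldb::NewLRUCache(640 * 1024 * 1024);   // 640 MB
    return options;
}
```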
What I did is very simple: I just put 2 million {key, value} pairs, with no reads at all. Each key is a byte array of 20 random bytes, and each value is a byte array of 100 random bytes. I repeatedly put newly generated random {key, value} pairs 2 million times, and performed no other operations.
In my experiment, I can see that the write speed decreases from the very beginning. The instantaneous speed (measured over every 1024 writes) swings between 50/s and 10,000/s, with a peak of 10,000/s. My overall average speed over the 2 million pairs is around 3,000/s.
Since the report claims write speeds of 400,000/s, my benchmark is 40 to 130 times slower, and I am wondering what's wrong with it.
My test code is trivial, so I won't paste it here: it is just a loop over 2 million iterations, and in each iteration I generate a 20-byte key and a 100-byte value and put them into the LevelDB database. I also measured the time spent on {key, value} generation; it is negligible (0 ms as measured).
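In case it helps, the loop looks roughly like this (a minimal sketch assuming the LevelDB C++ API, default asynchronous WriteOptions, and a hypothetical database path "testdb" — not my exact code):

```cpp
#include <cstdlib>
#include <string>
#include "leveldb/db.h"

int main() {
    leveldb::DB* db;
    leveldb::Options options;
    options.create_if_missing = true;
    leveldb::Status status = leveldb::DB::Open(options, "testdb", &db);
    if (!status.ok()) return 1;

    leveldb::WriteOptions write_options;  // sync is false by default (no fsync per write)
    std::string key(20, '\0');
    std::string value(100, '\0');
    for (int i = 0; i < 2000000; ++i) {
        // Fill key and value with fresh random bytes each iteration.
        for (size_t j = 0; j < key.size(); ++j)
            key[j] = static_cast<char>(std::rand() & 0xFF);
        for (size_t j = 0; j < value.size(); ++j)
            value[j] = static_cast<char>(std::rand() & 0xFF);
        db->Put(write_options, key, value);
    }
    delete db;
    return 0;
}
```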
Can anyone help me with this? How can I achieve 400,000 writes/s with LevelDB? Which settings should I tune?
Thanks
Moreover
I just ran the official db_bench.cc on my machine. It is 28 times slower than the report.
Since I used their own benchmark program, I think the only difference between my benchmark and theirs is the machine.