The table has two fields, id and content, with a single primary key (id).
The id field is a BIGINT and the content field is TEXT. Since content is variable-length, some records may reach 20 KB, and the average record length is about 3 KB.
Table schema:
CREATE TABLE `events` (
`eventId` bigint(20) NOT NULL DEFAULT '0',
`content` text,
PRIMARY KEY (`eventId`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
It's just used as a Key-Value storage.
My test results (single-row inserts) are:
InnoDB: 2200 records/second
TokuDB: 1300 records/second
BDB-JE: 12000 records/second
LevelDB-JNI: 22000 records/second (unstable; needs retesting)
These results are much worse than I expected.
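For reference, the benchmark is just a one-insert-per-statement loop over rows with ~3 KB of content. A rough sketch of that loop (using sqlite3 as a self-contained stand-in; the real test ran against MySQL, and my actual harness may differ in details):

```python
import sqlite3
import time

def bench_inserts(conn, n=1000, avg_len=3072):
    """Insert n rows of roughly avg_len-byte content, one statement and
    one commit per row, and return the measured records/second."""
    cur = conn.cursor()
    cur.execute(
        "CREATE TABLE IF NOT EXISTS events "
        "(eventId INTEGER PRIMARY KEY, content TEXT)"
    )
    payload = "x" * avg_len          # ~3 KB value, matching the average record
    start = time.time()
    for i in range(n):
        cur.execute(
            "INSERT INTO events (eventId, content) VALUES (?, ?)",
            (i, payload),
        )
        conn.commit()                # one transaction per insert
    return n / (time.time() - start)

rate = bench_inserts(sqlite3.connect(":memory:"), n=1000)
```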
Is a 3 KB average record too big for TokuDB?
In my application there are many inserts (>2000 records/second, about 100M records/day) and very few updates/deletes.
TokuDB version: mysql-5.1.52-tokudb-5.0.6-36394-linux-x86_64-glibc23.tar.gz
InnoDB version: mysql 5.1.34
OS: CentOS 5.4 x86_64
One reason we chose InnoDB/TokuDB is that we need partition support and easy maintenance. Maybe I should try LevelDB or another key-value store? Any suggestions are welcome.
===========
Thanks, everybody. In the end, neither TokuDB nor InnoDB performed well enough for our use case.
We are now using a Bitcask-like solution as our storage. Its append-only write performance is much better than we expected. We just need to handle the memory footprint of the in-memory hash index.
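For context, the Bitcask model we adopted can be sketched in a few lines. This is an illustrative toy, not our production code or the real Bitcask (no CRCs, tombstones, hint files, or merge/compaction): every put appends a record to a log file, and an in-memory hash index maps each key to the offset of its latest value, which is where the memory concern comes from.

```python
import os
import struct

class BitcaskLite:
    """Toy append-only key-value store in the spirit of Bitcask.

    Each record is [key: 8-byte big-endian int][value length: 4 bytes]
    [value bytes]. An in-memory dict maps key -> (offset, length) of the
    latest value; overwrites simply append a new record. On open, the
    index is rebuilt by scanning the whole log.
    """

    def __init__(self, path):
        self.index = {}              # key -> (value_offset, value_len)
        self.f = open(path, "a+b")   # writes always append
        self._rebuild_index()

    def _rebuild_index(self):
        self.f.seek(0)
        while True:
            header = self.f.read(12)
            if len(header) < 12:
                break
            key, vlen = struct.unpack(">qI", header)
            self.index[key] = (self.f.tell(), vlen)
            self.f.seek(vlen, os.SEEK_CUR)   # skip over the value

    def put(self, key, value):
        self.f.seek(0, os.SEEK_END)
        self.f.write(struct.pack(">qI", key, len(value)))
        self.index[key] = (self.f.tell(), len(value))
        self.f.write(value)

    def get(self, key):
        off, vlen = self.index[key]
        self.f.seek(off)
        return self.f.read(vlen)
```

With a bigint key, each index entry costs a fixed few dozen bytes per live key regardless of value size, so at ~100M records/day the index growth, not write throughput, becomes the thing to manage.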