I am benchmarking a Cassandra cluster and am therefore using the cassandra-stress tool. I am able to insert 1M records into one of the tables with a replication factor of 2, CL QUORUM, and a rate of 40 threads, on 2 nodes, running stress from 11.43.600.66:
```
./cassandra-stress user profile=demo.yaml n=1000000 ops(insert=1,likelyquery0=2) cl=quorum -node 11.43.600.66,11.43.600.65 -rate threads=40
```
**demo.yaml script:**
```
columnspec:
  - name: user_name
    size: gaussian(20..45)
    population: gaussian(10000..50000)
  - name: system_name
    size: gaussian(20..45)
    population: gaussian(50..60)
  - name: time
    size: uniform(15..25)
    population: uniform(100000..1000000)
  - name: request_uri
    size: gaussian(50..80)
    population: gaussian(100..150)

insert:
  partitions: fixed(1)
  select: fixed(1)/1000
  batchtype: UNLOGGED
```
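(Only the columnspec and insert sections are shown above; the profile also contains the keyspace/table definitions and the queries section that defines likelyquery0, referenced from the stress command. A hypothetical sketch of that part is below for context only; the schema shown is illustrative, not my exact table.)

```
# Hypothetical sketch only -- my actual keyspace/table definitions differ.
keyspace: stress_ks
table: demo
table_definition: |
  CREATE TABLE demo (
    user_name text,
    system_name text,
    time text,
    request_uri text,
    PRIMARY KEY (user_name, time)
  )
queries:
  likelyquery0:
    cql: select * from demo where user_name = ? LIMIT 10
    fields: samerow
```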
I am trying to reconcile the results of nodetool cfstats and cfhistograms with the OpsCenter table-level metrics for Write and Read Request Latency (ms/op).

I use the cfhistograms output to calculate write and read latency; those latencies are in microseconds. The cfstats latencies are in milliseconds.
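For reference, these are the commands I run to collect those numbers (the keyspace and table names below are placeholders for mine):

```
nodetool cfstats <keyspace>.<table>        # per-table stats, latencies reported in ms
nodetool cfhistograms <keyspace> <table>   # percentile histograms, latencies reported in micros
```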
a) The cfstats results:

Write Latency: 0.0117 ms = 11.7 micros
Read Latency: 0.0943 ms = 94.3 micros

These approximately match the cfhistograms results at the 50th percentile:

Write Latency: 10 micros
Read Latency: 103 micros
Question 1: On what percentile are the cfstats and cfhistograms results based? I would normally assume the 95th percentile, but at 95% the cfstats results don't match cfhistograms here. Am I assuming something wrong?
b) The OpsCenter results:

Write Latency: 1.6 ms/op = 1600 micros
Read Latency: 1.9 ms/op = 1900 micros
Question 2: Why is there a mismatch between the cfhistograms and OpsCenter results? Should the OpsCenter y-axis values for Write/Read Request Latency be read as micros/op instead of ms/op?
Versions:
Cassandra 2.1.8.689
OpsCenter 5.2.2
Please let me know if I am wrong!
Thanks