I found this GitHub project for measuring the write performance of Bigtable: https://github.com/GoogleCloudPlatform/cloud-bigtable-examples/tree/master/java/simple-performance-test

As per the official documentation, we should expect write throughput of up to 10K rows per second for a Bigtable instance with a single node and SSD storage. However, on average I'm getting about 35 QPS of write throughput for that same configuration, roughly 1/300th of the documented estimate. Is this unusual?

I'm running my benchmark on 1 million rows (1 KB per row). I also modified the source code to generate 1 million distinct values, since the original code generates a single value and feeds that same value to Bigtable for every row. Note that the monitoring console never shows anything above 15 QPS. Is there any specific reason for this variance between what I see on the console and what I see while executing the performance-testing utility?
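The change amounts to something like the following (a simplified sketch using the same HBase-style client as the linked example; the project, instance, table, and column-family names here are placeholders):

```java
import com.google.cloud.bigtable.hbase.BigtableConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class DistinctValueWriter {
  public static void main(String[] args) throws Exception {
    // Placeholder project/instance/table names.
    try (Connection connection = BigtableConfiguration.connect("my-project", "my-instance");
         Table table = connection.getTable(TableName.valueOf("perf-test"))) {
      byte[] payload = new byte[1024]; // 1 KB per row
      for (int i = 0; i < 1_000_000; i++) {
        // Distinct row key and distinct value for every row, instead of
        // feeding the same value a million times.
        String key = String.format("row-%09d", i);
        System.arraycopy(Bytes.toBytes(key), 0, payload, 0, key.length());
        Put put = new Put(Bytes.toBytes(key));
        put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("col"), payload.clone());
        // Note: a sequential loop like this is bound by per-request
        // round-trip latency, not by server capacity.
        table.put(put);
      }
    }
  }
}
```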

This Stack Overflow reference suggests that the performance I'm seeing might not be unusual: Google Bigtable performance: QPS vs CPU utilization

Is there any other way or utility that can help me benchmark Bigtable write, read, and scan performance?

Balajee Venkatesh

1 Answer

Cloud Bigtable performance is highly dependent on workload, schema design, and dataset characteristics; the performance numbers shown on this documentation page are estimates only.

I recommend reading the full documentation, which covers the causes of slower performance, testing recommendations, and a troubleshooting section for performance issues.

In addition, you can use the Cloud Bigtable loadtest tool, written in Go, as a starting point for developing your own performance test.
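If you would rather stay in Java than adapt the Go tool, a skeleton for your own write test might look like the sketch below. This is a minimal illustration, not the loadtest tool itself; all names are placeholders, and a realistic test would also parallelize writers and measure reads and scans:

```java
import com.google.cloud.bigtable.hbase.BigtableConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SimpleWriteTest {
  public static void main(String[] args) throws Exception {
    try (Connection connection = BigtableConfiguration.connect("my-project", "my-instance");
         Table table = connection.getTable(TableName.valueOf("perf-test"))) {
      int n = 1000;
      List<Long> latenciesNanos = new ArrayList<>(n);
      long start = System.nanoTime();
      for (int i = 0; i < n; i++) {
        Put put = new Put(Bytes.toBytes("row-" + i));
        put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("col"), Bytes.toBytes("value-" + i));
        // Time each individual write so latency percentiles can be derived.
        long t0 = System.nanoTime();
        table.put(put);
        latenciesNanos.add(System.nanoTime() - t0);
      }
      double elapsedSeconds = (System.nanoTime() - start) / 1e9;
      Collections.sort(latenciesNanos);
      System.out.printf("writes: %d, QPS: %.0f, median latency: %.1f ms%n",
          n, n / elapsedSeconds, latenciesNanos.get(n / 2) / 1e6);
    }
  }
}
```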

llompalles
  • Is this load-testing tool still buildable with a current Go toolchain? I'm having a hard time fixing the "use of internal package not allowed" error. This code imports the 'cbtconfig' and 'stat' packages, which are internal to the Bigtable package. Any idea how I can build and run the code? I have no experience with Go, so I would need some help with the steps to get this tool working; I can take care of the required modifications afterwards. Thanks for the support. – Balajee Venkatesh Jun 10 '20 at 03:09
  • You need to clone the whole repository `https://github.com/googleapis/google-cloud-go` and then `cd google-cloud-go/bigtable/cmd/loadtest/`; there you will be able to compile it with `go build loadtest.go`. The cause of the error you are getting is explained in [this answer](https://stackoverflow.com/a/59342483/7757976). – llompalles Jun 10 '20 at 10:49
  • This is the output I get:
    `name,count,errors,min,median,max,p75,p90,p95,p99`
    `reads,19603,0,2.737824ms,10.704144ms,71.403716ms,14.300209ms,19.709184ms,24.419084ms,35.385176ms`
    `writes,19399,0,3.042753ms,11.666074ms,71.697233ms,15.254157ms,20.895185ms,25.49103ms,36.285148ms`
    – Balajee Venkatesh Jun 10 '20 at 11:28
  • This test runs for 5 seconds, so if I understand correctly it represents about 4K rows per second of read/write throughput with a median latency of approximately 10 ms. Is this understanding correct? (See the quick check below.) – Balajee Venkatesh Jun 10 '20 at 11:30
  • Looks like it. – llompalles Jun 11 '20 at 07:35
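Checking the arithmetic from the comments above: over the tool's 5-second run, the reported counts work out to roughly 3.9K reads and 3.9K writes per second, and the ~10.7 ms and ~11.7 ms figures in the output are median (not average) latencies. A minimal check:

```java
public class LoadtestMath {
  public static void main(String[] args) {
    // Counts reported by the 5-second loadtest run quoted above.
    double seconds = 5.0;
    System.out.printf("read QPS:  %.0f%n", 19603 / seconds); // ≈ 3921
    System.out.printf("write QPS: %.0f%n", 19399 / seconds); // ≈ 3880
  }
}
```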