I am testing a small Bigtable cluster (the minimum of 3 nodes). I can see in the Google Cloud console that as the Write QPS level approaches 10K, CPU utilization approaches the recommended maximum of ~80%.
From what I understand, the Write QPS metric shown in the console is for the whole instance, not per node. If that's the case, why is the CPU threshold being reached when the instance is technically at only 1/3 of the 30K writes/sec guidance (3 nodes × ~10K writes/sec per node)? I'm just trying to understand whether something is off with my data-upload program (a Dataflow job).
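For reference, here is my back-of-the-envelope math, assuming the documented planning figure of ~10,000 writes/sec per node (which the docs say applies to 1 KB rows on SSD clusters; actual throughput varies with row size and key distribution):

```python
# Expected aggregate write throughput for my 3-node cluster,
# using the ~10K writes/sec/node planning figure as an assumption.
nodes = 3
writes_per_sec_per_node = 10_000

expected_instance_qps = nodes * writes_per_sec_per_node
print(expected_instance_qps)  # 30000

# What I actually observe before hitting ~80% CPU:
observed_qps = 10_000
print(observed_qps / expected_instance_qps)  # roughly 1/3 of guidance
```

So by that math I would expect the CPU warning to appear near 30K writes/sec, not 10K.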
I'm also curious why I never manage to observe anything close to 30K writes/sec, but I suspect that's a limitation on the Dataflow side, since I'm still restricted to the 8-CPU quota while on the free trial...