There are two different scenarios for this:
1. Single-cluster Bigtable instance
2. Multi-cluster Bigtable instance
In (1), say you have 10 nodes in your cluster; each row is assigned to one of those nodes (assume 1/10th on each for simplicity). This setup offers strong consistency, but if one of the nodes dies, the 1/10th of rows it was managing becomes unavailable until another node picks them up. A replacement node spins up very quickly (or the rows are redistributed across the remaining nodes), so the outage is hardly noticeable, and there is no data loss because Bigtable writes to Google's Colossus distributed file system rather than to disks attached to individual nodes.
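For illustration, here is a minimal sketch of pinning traffic to one cluster with a single-cluster-routing app profile, using the google-cloud-bigtable Python admin client. The project, instance, cluster, table, and profile IDs are placeholders, and the exact client surface may differ slightly between library versions.

```python
from google.cloud import bigtable
from google.cloud.bigtable import enums

# Placeholder IDs: substitute your own project/instance/cluster names.
client = bigtable.Client(project="my-project", admin=True)
instance = client.instance("my-instance")

# Single-cluster routing: every request goes to one cluster, which is what
# gives you strong consistency (and allows single-row transactional writes).
profile = instance.app_profile(
    app_profile_id="single-cluster-profile",
    routing_policy_type=enums.RoutingPolicyType.SINGLE,
    cluster_id="my-cluster-c1",
    description="Pin reads/writes to one cluster for strong consistency",
    allow_transactional_writes=True,
)
profile.create()

# Requests issued through this profile always hit my-cluster-c1.
table = instance.table("my-table", app_profile_id="single-cluster-profile")
row = table.read_row(b"row-key-1")
```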
In (2), there are multiple clusters, possibly thousands of miles apart, with data replicated between them. Bigtable supports a multi-primary setup in which every cluster can accept writes, and replication is eventually consistent; in other words, Bigtable favors lower latency and availability over consistency here. In a failure scenario with a multi-cluster routing policy, failover is automatic, but some changes from cluster A may not have reached cluster B yet. Cluster B will remain available and continue serving data, even though it may not yet reflect the latest writes.
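A corresponding sketch for the multi-cluster case, again with placeholder IDs and an assumed column family `cf1`: with `RoutingPolicyType.ANY`, Bigtable routes each request to the nearest available cluster and fails over automatically, at the cost of eventual consistency between clusters.

```python
from google.cloud import bigtable
from google.cloud.bigtable import enums

client = bigtable.Client(project="my-project", admin=True)
instance = client.instance("my-instance")  # assumed to have clusters in 2+ zones/regions

# Multi-cluster routing: requests go to the nearest available cluster and
# fail over automatically if that cluster becomes unreachable. Replication
# between clusters is asynchronous, so reads may briefly lag recent writes.
profile = instance.app_profile(
    app_profile_id="multi-cluster-profile",
    routing_policy_type=enums.RoutingPolicyType.ANY,
    description="Favor availability and latency over consistency",
)
profile.create()

# A write issued through this profile lands on whichever cluster is closest
# and healthy, then replicates to the other clusters in the background.
table = instance.table("my-table", app_profile_id="multi-cluster-profile")
row = table.direct_row(b"row-key-1")
row.set_cell("cf1", b"col", b"value")
row.commit()
```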
So trade-offs vary depending on the setup.