The main problem with counters is this limit:
In Cloud Firestore, you can only update a single document about once per second, which might be too low for some high-traffic applications.
The same documentation page that explains how to use distributed counters to work around this also shows an example of how to then read the counter's total:
To get the total count, query for all shards and sum their count fields:
function getCount(ref) {
  // Sum the count of each shard in the subcollection
  return ref.collection('shards').get().then(snapshot => {
    let total_count = 0;
    snapshot.forEach(doc => {
      total_count += doc.data().count;
    });
    return total_count;
  });
}
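Calling this helper could look something like this (a minimal usage sketch; `db` is assumed to be an initialized Firestore instance, and the `counters/page_views` path is just a placeholder):

const counterRef = db.collection('counters').doc('page_views');

getCount(counterRef).then(total => {
  console.log(`Total count: ${total}`);
});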
The documentation also mentions the main disadvantage of this approach:
Cost - The cost of reading a counter value increases linearly with the number of shards, because the entire shards subcollection must be loaded.
One way to work around this is to periodically read the counts from the shards and update a master list of counts. This essentially turns the whole exercise into a map-reduce solution. You'll want to run this reduce code on a schedule, since otherwise you'd still run into the write rate limit on the aggregated document. A periodically triggered Cloud Function sounds ideal for this; a sketch follows below.
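Here is a minimal sketch of that scheduled reduce step, assuming a `counters` collection whose documents each have a `shards` subcollection (as in the documentation example above). The function name, the `counter_totals/totals` document, and the schedule are all placeholder assumptions, not part of the documented API:

const functions = require('firebase-functions');
const admin = require('firebase-admin');

admin.initializeApp();
const db = admin.firestore();

// Runs the reduce step on a schedule instead of on every write,
// so the aggregated document stays under the write rate limit.
exports.aggregateCounters = functions.pubsub
  .schedule('every 5 minutes')
  .onRun(async () => {
    const counters = await db.collection('counters').get();
    const totals = {};

    // Sum the shards of each counter, just like getCount() does for one counter
    for (const counter of counters.docs) {
      const shards = await counter.ref.collection('shards').get();
      let total_count = 0;
      shards.forEach(doc => {
        total_count += doc.data().count;
      });
      totals[counter.id] = total_count;
    }

    // Write all totals into a single document, so reading the counts costs
    // one document read instead of one read per shard.
    await db.collection('counter_totals').doc('totals').set(totals);
    return null;
  });

Clients can then get the aggregated totals with a single document read, accepting that the value may lag behind the shards by up to the schedule interval.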