
My goal is to write 1000 documents to different paths (some in the same collection, some not) as quickly as possible via the Admin SDK in Cloud Functions. If I just put a db.set() in a for loop that runs 1000 times, compared to running a batched write twice (because the limit is 500 documents per batch commit), is there any difference between the two approaches? Since I am using the Admin SDK, security rules don't apply, but I don't know whether the first approach could cause hotspot issues or get throttled.

flutroid
1 Answer


Firestore has a soft limit of 500 writes per second. Even so, I would chunk your updates into batches, either by writing your own code or by using something like chunk from lodash (or a language equivalent), purely because I think it's good practice to batch writes to get atomic updates: either all writes in the batch succeed or none do. Which is best depends on your business logic, though. That said, the fastest way to do a large volume of writes seems to be individual parallel write operations, see this. To achieve that, you could use something like pMap (or a language equivalent) to send x requests at once. Please don't make 1000 serial requests though, it will take ages!
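A minimal sketch of the batched approach in Node.js, assuming an initialized Admin SDK Firestore instance is passed in as db. The chunk helper mirrors lodash's chunk, and the commitInBatches name is my own invention, not an SDK API:

```javascript
// Split an array into groups of at most `size` elements,
// equivalent to lodash's chunk().
function chunk(items, size) {
  const groups = [];
  for (let i = 0; i < items.length; i += size) {
    groups.push(items.slice(i, i + size));
  }
  return groups;
}

// docs is an array of { ref, data }, where ref would be a
// DocumentReference (e.g. db.doc('col/id')) in real use.
async function commitInBatches(db, docs) {
  // Firestore allows at most 500 writes per batch commit.
  for (const group of chunk(docs, 500)) {
    const batch = db.batch();
    for (const { ref, data } of group) {
      batch.set(ref, data);
    }
    // Each commit is atomic on its own; atomicity does NOT
    // span across the separate batches.
    await batch.commit();
  }
}
```

Because db is injected, the same function works unchanged against the real Admin SDK (admin.firestore()) or a stub in tests.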

You didn't mention JavaScript, so if you're using another language, ignore my library suggestions.

TL;DR: Batch your writes if it's important for your business logic to have atomic writes; if not, do individual parallel writes, as at that volume of requests you won't run into throttling issues.
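For the parallel approach, here is a hand-rolled sketch of a bounded-concurrency mapper in the spirit of pMap; the mapWithConcurrency name and the concurrency value of 50 are my own assumptions, and in real use the mapper would wrap db.doc(path).set(data):

```javascript
// Run `mapper` over `items`, keeping at most `concurrency`
// calls in flight at once. Results preserve input order.
async function mapWithConcurrency(items, mapper, concurrency) {
  const results = new Array(items.length);
  let next = 0;
  async function worker() {
    // `next++` is synchronous, so workers never grab the same index.
    while (next < items.length) {
      const i = next++;
      results[i] = await mapper(items[i], i);
    }
  }
  const workers = Array.from(
    { length: Math.min(concurrency, items.length) },
    worker
  );
  await Promise.all(workers);
  return results;
}

// Hypothetical usage with the Admin SDK:
// await mapWithConcurrency(
//   docs, // [{ path, data }]
//   ({ path, data }) => db.doc(path).set(data),
//   50
// );
```

The concurrency cap keeps 1000 writes from being fired in one burst while still avoiding 1000 serial round trips.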

omeanwell