Each Firestore document can contain at most 1,048,576 bytes (1 MiB) of data, a limit that includes not only the field values but also the field names and the name (path) of the document itself. A single document therefore cannot hold an array with millions of items: there are barely a million bytes available in the entire document.
A string array named fruits with two items, "kiwi" and "orange", consumes 19 bytes by Firestore's measure: 7 bytes for the field name and 5 and 7 bytes for the two string values (a string costs its UTF-8 byte length plus 1). So you could have an array containing tens or even hundreds of thousands of fruits, but not millions. At that point, though, you may be better off rethinking your data architecture, because Firestore is purpose-built for large collections of small documents. And, as of the writing of this answer, there is no documented limit to the size of a collection.
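That 19-byte figure follows from Firestore's documented storage-size rules, which can be sketched in a few lines of Node.js. This is a simplified calculator that only handles string array fields (other types such as numbers, booleans, and maps have their own rules, which it ignores):

```javascript
// Firestore's documented rules for the types used here:
// - a string costs its UTF-8 byte length + 1
// - a field name is measured like a string
// - an array costs the sum of its element sizes
function stringSize(s) {
  return Buffer.byteLength(s, "utf8") + 1;
}

function arrayFieldSize(fieldName, items) {
  return stringSize(fieldName) + items.reduce((sum, s) => sum + stringSize(s), 0);
}

// "fruits" = 7 bytes, "kiwi" = 5, "orange" = 7
console.log(arrayFieldSize("fruits", ["kiwi", "orange"])); // → 19
```

Dividing the 1,048,576-byte budget by the per-item cost gives a quick back-of-the-envelope ceiling for how many short strings one array can hold.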
But if you are hell-bent on an array with millions of items and you want to avoid huge documents, you could consider a distributed array: several smaller arrays spread across other documents that share the load. You could pick an array/document at random before writing to it, or keep a counter that determines which array/document receives the next write. I'm not advocating this kind of solution, but it can be done. Whatever you choose, be aware that Firestore charges ($) per document read and write, so fetching one array with 1,000 items will cost you 1 read, whereas fetching 1,000 documents will cost you 1,000 reads.
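A minimal sketch of the shard-picking logic described above, showing both strategies (random and counter-based round-robin). The shard count, collection name, and `db` handle are assumptions for illustration, not part of any Firestore API:

```javascript
// Hypothetical distributed ("sharded") array: items are spread across
// NUM_SHARDS documents, and each write picks one shard.
const NUM_SHARDS = 10;
let counter = 0;

// Strategy 1: pick a shard at random.
function randomShardId() {
  return Math.floor(Math.random() * NUM_SHARDS);
}

// Strategy 2: rotate through shards with a counter (round-robin),
// which spreads writes evenly instead of probabilistically.
function nextShardId() {
  return counter++ % NUM_SHARDS;
}

// With the official Admin SDK, the write itself might look like
// (not run here; assumes an initialized `db` and FieldValue import):
// await db.collection("fruits-shards")
//   .doc(String(nextShardId()))
//   .update({ items: FieldValue.arrayUnion("kiwi") });
```

Reading the whole "array" back then means fetching all NUM_SHARDS documents and concatenating their `items` fields, which is exactly where the per-document read charges mentioned above start to add up.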