I am trying to figure out how to reduce reads on a relatively static Firestore collection.
The basics of my data structure:
- users/{userId}
- organisations/{organisationId}/employees/{employeeId}
Each user will belong to one organisation. Users and employees are not referentially linked, but as more users join an organisation, the number of employee documents can be assumed to be roughly equal to the number of users in that organisation.
The collection of employees will not change often, but on exceptional days it may receive hundreds of writes.
When a user opens the app, we will fetch the collection of employees associated with their organisation.
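For reference, this is roughly what that per-user fetch looks like (web SDK v9 modular API). The `organisationId` field on the user document is an assumption about how the user-to-organisation link is stored:

```typescript
import { initializeApp } from "firebase/app";
import { getFirestore, collection, doc, getDoc, getDocs } from "firebase/firestore";

const app = initializeApp({ /* your config */ });
const db = getFirestore(app);

async function fetchEmployees(userId: string) {
  // Look up which organisation the user belongs to (field name is an assumption).
  const userSnap = await getDoc(doc(db, "users", userId));
  const organisationId = userSnap.get("organisationId") as string;

  // One billed read per employee document in the organisation.
  const employeesSnap = await getDocs(
    collection(db, "organisations", organisationId, "employees")
  );
  return employeesSnap.docs.map((d) => ({ id: d.id, ...d.data() }));
}
```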
The problem I am facing is that as the number of users (and hence employees) grows, the number of reads grows quadratically, roughly N², since each of the N users reads roughly N employee documents. This is obviously problematic: an organisation with just 1,000 users will result in 1,000,000 reads if each user opens the app only once. Users open the app around a dozen times a day, so in practice the number is considerably higher.
My initial thinking was that I could fetch the collection of employees in a function and leverage caching on the CDN. This is problematic, though, as I need to share cached results between a large number of users, and there doesn't seem to be a particularly secure way to do that without opening the app up to leaking employee collections. I need to vary the result by a user's organisationId while also verifying their auth token.
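For context, here is a rough sketch of the function approach I was picturing, using an HTTPS Cloud Function with the Admin SDK. The field name and max-age value are assumptions; the Cache-Control line at the end is exactly where the problem sits:

```typescript
import { onRequest } from "firebase-functions/v2/https";
import { initializeApp } from "firebase-admin/app";
import { getAuth } from "firebase-admin/auth";
import { getFirestore } from "firebase-admin/firestore";

initializeApp();

export const employees = onRequest(async (req, res) => {
  // Verify the caller's Firebase ID token from the Authorization header.
  const idToken = req.headers.authorization?.replace("Bearer ", "");
  if (!idToken) {
    res.status(401).send("Missing token");
    return;
  }
  const decoded = await getAuth().verifyIdToken(idToken);

  // Resolve the caller's organisation (field name is an assumption).
  const userSnap = await getFirestore().doc(`users/${decoded.uid}`).get();
  const organisationId = userSnap.get("organisationId");

  const snap = await getFirestore()
    .collection(`organisations/${organisationId}/employees`)
    .get();

  // "private" keeps the response out of the shared CDN cache, so nothing is
  // saved across users; "public" would share it per URL, but then the CDN
  // serves it without re-checking auth, which is the leak described above.
  res.set("Cache-Control", "private, max-age=300");
  res.json(snap.docs.map((d) => ({ id: d.id, ...d.data() })));
});
```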
I have considered caching the results on the client, but this only cuts the reads down to N² in total (one full fetch per client), since each client will still need to fetch the collection at least once.
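For completeness, the client-side caching I considered would look something like this, using Firestore's persistent local cache with a cache-first read; how and when to invalidate the cache is hand-waved here:

```typescript
import { initializeApp } from "firebase/app";
import {
  initializeFirestore,
  persistentLocalCache,
  collection,
  getDocs,
  getDocsFromCache,
} from "firebase/firestore";

const app = initializeApp({ /* your config */ });
const db = initializeFirestore(app, { localCache: persistentLocalCache() });

async function getEmployeesCacheFirst(organisationId: string) {
  const ref = collection(db, "organisations", organisationId, "employees");
  try {
    const cached = await getDocsFromCache(ref);
    if (!cached.empty) return cached.docs; // served locally, zero billed reads
  } catch {
    // Cache unavailable; fall through to the server.
  }
  return (await getDocs(ref)).docs; // billed once per employee document
}
```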
Other options include Redis or using something like Algolia to search through the results as needed. But both of these solutions seem to get expensive quite quickly.
Thanks in advance.