This can definitely be done with Cloud Functions.
I don't think you should be hitting memory constraints unless you're trying to query the entire collection at once. Use paginated querying to cap the number of documents read at once and then loop through the pages instead.
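For instance, with the Node.js Admin SDK, a paginated loop might look something like the sketch below. The collection name `items` and the `migrated` field are placeholders; the idea is just that each iteration only ever holds one page of documents in memory.

```typescript
import * as admin from "firebase-admin";

admin.initializeApp();
const db = admin.firestore();

// A batched write holds at most 500 operations, so keep pages <= 500.
const PAGE_SIZE = 500;

async function updateAllInPages(): Promise<void> {
  let last: FirebaseFirestore.QueryDocumentSnapshot | null = null;

  // Loop page by page so only PAGE_SIZE documents are read at once.
  for (;;) {
    let query = db
      .collection("items")
      .orderBy(admin.firestore.FieldPath.documentId())
      .limit(PAGE_SIZE);
    if (last) query = query.startAfter(last);

    const page = await query.get();
    if (page.empty) break;

    const batch = db.batch();
    page.docs.forEach((doc) => batch.update(doc.ref, { migrated: true }));
    await batch.commit();

    last = page.docs[page.docs.length - 1];
  }
}
```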
Option 1
One quick way to get around the timeout limitation is to use a Pub/Sub-triggered cloud function. When your function is about to time out, have it publish to its own Pub/Sub topic to trigger itself to run again. Just make sure it stops publishing once there are no more documents to update, otherwise it'll get stuck in an infinite loop.
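A minimal sketch of that pattern is below. Instead of watching the clock, this version simply handles one page per invocation and republishes the cursor, which keeps each run well inside the timeout. The topic name `update-collection`, collection `items`, and the `migrated` field are all assumptions for illustration.

```typescript
import * as functions from "firebase-functions";
import * as admin from "firebase-admin";
import { PubSub } from "@google-cloud/pubsub";

admin.initializeApp();
const db = admin.firestore();
const pubsub = new PubSub();

const TOPIC = "update-collection"; // placeholder topic name
const PAGE_SIZE = 500;             // max writes per batched commit

export const updateCollection = functions.pubsub
  .topic(TOPIC)
  .onPublish(async (message) => {
    // The cursor is the last document ID handled by the previous invocation.
    let cursor: string | undefined;
    try {
      cursor = (message.json as { cursor?: string }).cursor;
    } catch {
      cursor = undefined; // the first run may be kicked off with a non-JSON body
    }

    let query = db
      .collection("items")
      .orderBy(admin.firestore.FieldPath.documentId())
      .limit(PAGE_SIZE);
    if (cursor) query = query.startAfter(cursor);

    const page = await query.get();
    if (page.empty) return; // nothing left to update: stop republishing

    const batch = db.batch();
    page.docs.forEach((doc) => batch.update(doc.ref, { migrated: true }));
    await batch.commit();

    // Re-trigger this same function with the new cursor.
    const lastId = page.docs[page.docs.length - 1].id;
    await pubsub.topic(TOPIC).publishMessage({ json: { cursor: lastId } });
  });
```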
Option 2
If you need this update task to be performed extremely quickly, you can use a divide and conquer strategy that looks like this:
[Fn A] =publish=to=> [Pub/Sub] =trigger=> [Fn B], [Fn B], [Fn B], [Fn B] . . .
Cloud function A queries the collection using paginated querying with pages of size N. For each page, it publishes the uid of the first document plus the value of N to a Pub/Sub topic (sketched below).
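A rough sketch of Fn A, again with placeholder names (`items`, `update-page`):

```typescript
import * as admin from "firebase-admin";
import { PubSub } from "@google-cloud/pubsub";

admin.initializeApp();
const db = admin.firestore();
const pubsub = new PubSub();

const N = 500; // page size; also the max writes a single batch can commit

export async function fanOutPages(): Promise<void> {
  let last: FirebaseFirestore.QueryDocumentSnapshot | null = null;

  for (;;) {
    // select() fetches only document references, keeping Fn A's reads cheap.
    let query = db
      .collection("items")
      .select()
      .orderBy(admin.firestore.FieldPath.documentId())
      .limit(N);
    if (last) query = query.startAfter(last);

    const page = await query.get();
    if (page.empty) break;

    // One message per page: the first document ID on the page plus N.
    await pubsub
      .topic("update-page")
      .publishMessage({ json: { startId: page.docs[0].id, count: N } });

    last = page.docs[page.docs.length - 1];
  }
}
```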
Write a cloud function B that is triggered by that Pub/Sub topic. It reads the document uid and the number N from the message, uses that uid as a starting point, and updates the next N documents. This function will be triggered many times in parallel, once for each publication from Fn A, so your choice of N determines how many instances of function B spawn.
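Fn B could then look roughly like this; it matches the Fn A sketch above, and the same placeholder names apply. Keep N at or below 500 so each page fits in a single batched write.

```typescript
import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();
const db = admin.firestore();

export const updatePage = functions.pubsub
  .topic("update-page")
  .onPublish(async (message) => {
    const { startId, count } = message.json as {
      startId: string;
      count: number;
    };

    // Start at the first document of this page and take the next `count` docs.
    const page = await db
      .collection("items")
      .orderBy(admin.firestore.FieldPath.documentId())
      .startAt(startId)
      .limit(count)
      .get();

    // A single batch supports up to 500 writes, so keep count <= 500.
    const batch = db.batch();
    page.docs.forEach((doc) => batch.update(doc.ref, { migrated: true }));
    await batch.commit();
  });
```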