On Google Cloud Run, I have a Python script that requires only a small amount of memory. According to tracemalloc, the peak memory usage is on the order of 20 MB. Yet about 1 in 4 runs fails with:

"Memory limit of 512 MiB exceeded with 516 MiB used."

Furthermore, the "Cloud Run Metrics" show that the "Container memory utilization" of all runs is higher than 70%.
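
For reference, the tracemalloc figure was obtained roughly as in the following sketch; run_job is just a stand-in for the actual workload, which is not shown:

import tracemalloc

def run_job():
    # stand-in for the real workload (illustrative only)
    data = [object() for _ in range(100_000)]
    return len(data)

tracemalloc.start()
run_job()
current, peak = tracemalloc.get_traced_memory()  # both values are in bytes
print(f"current: {current / 1e6:.1f} MB, peak: {peak / 1e6:.1f} MB")
tracemalloc.stop()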

What could be the reason? Could this be explained by some memory overhead of Google Cloud Run? If so, what is the order of magnitude of this overhead? What can I do to reduce the memory usage? Any other suggestions or solutions?

Thanks in advance!

Maxwell86
  • After testing with Docker, I probably have the same problem as in the following question: https://stackoverflow.com/questions/70881991/memory-leak-after-every-request-hit-on-flask-api-running-in-a-container – Maxwell86 Jan 22 '23 at 14:29

2 Answers

The memory footprint on Cloud Run is not only that of your Python script but of your whole container. You have to measure the memory footprint of the container to get an idea of the memory required on Cloud Run. If you use Docker, docker stats can help you with that task (in your local environment).
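
As a rough cross-check from inside the container itself, you can read the cgroup memory accounting, which covers the whole container rather than only the Python heap. This is a minimal sketch; which file exists depends on whether the runtime uses cgroup v1 or v2:

from pathlib import Path

# The container's total memory usage is exposed at different paths
# depending on the cgroup version used by the runtime.
CANDIDATES = [
    Path("/sys/fs/cgroup/memory.current"),                 # cgroup v2
    Path("/sys/fs/cgroup/memory/memory.usage_in_bytes"),   # cgroup v1
]

def container_memory_bytes():
    for path in CANDIDATES:
        if path.exists():
            return int(path.read_text().strip())
    return None  # no cgroup memory file found (e.g. not in a container)

usage = container_memory_bytes()
if usage is not None:
    print(f"container memory usage: {usage / (1024 * 1024):.1f} MiB")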

You can also choose a smaller base image, remove unused binaries and libraries, and so on, to reduce the memory footprint.

guillaume blaquiere

Somehow, with each run, the memory in the container kept growing, hitting the memory limit after a couple of runs.

Finally, I was able to solve it by adding:

import gc
...
# explicitly run a garbage collection pass at the end of each run
gc.collect()

In both Docker and Google Cloud Run, the memory is now cleaned up after each run.
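
If the container serves requests with Flask (as in the question linked in the comments), one place to put the call is an after_request hook. This is only a minimal sketch; app, the /run route, and do_work are illustrative names, not part of the original answer:

import gc

from flask import Flask

app = Flask(__name__)

def do_work():
    # stand-in for the real per-request workload (illustrative only)
    return sum(range(1_000_000))

@app.route("/run")
def run():
    return {"result": do_work()}

@app.after_request
def collect_garbage(response):
    # force a collection after every request so memory freed by the
    # previous run is reclaimed before the next one starts
    gc.collect()
    return response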

Maxwell86