I have a single gunicorn worker process running that reads an enormous Excel file, which takes up to 5 minutes and uses 4 GB of RAM. After the request finished processing, I noticed in the system monitor that the worker still holds 4 GB of RAM indefinitely. Any ideas on how to release the memory?
- Short form: Don't worry about it. If it's not actively being used, it'll be swapped out; the virtual memory space remains allocated, but something else will be in physical memory. Many allocators *won't ever* release memory back to the OS -- they just release it into a pool that the application will malloc() from *without needing to ask the OS for more* in the future. – Charles Duffy Aug 15 '18 at 21:50
- I'm split on whether to flag this as a duplicate of [Releasing memory in Python](https://stackoverflow.com/questions/15455048/releasing-memory-in-python). Whether or not they're duplicates, it's certainly a very pertinent read. – Charles Duffy Aug 15 '18 at 22:57
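To observe the pooling behavior described in the first comment, you can watch the process's resident set size around a large allocation. A minimal Linux-only sketch (it assumes `/proc/self/status` is available; the object count is illustrative):

```python
import gc

def rss_mb():
    """Current resident set size in MB (Linux-only: parses /proc/self/status)."""
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1]) / 1024  # value is reported in kB

print(f"baseline:    {rss_mb():.0f} MB")
big = [str(i) for i in range(10_000_000)]  # millions of small objects
print(f"after alloc: {rss_mb():.0f} MB")
del big
gc.collect()
# RSS often stays well above the baseline here: freed small objects are
# kept in CPython's and the C allocator's internal pools for reuse rather
# than being handed straight back to the OS. Exact behavior varies by
# allocator, platform, and fragmentation.
print(f"after del:   {rss_mb():.0f} MB")
```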
1 Answer
You can try setting the max_requests parameter (N) for a gunicorn worker, which tells gunicorn to restart the worker after it has handled N requests.
You can read more about the max_requests setting here: http://docs.gunicorn.org/en/stable/settings.html
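For example, a minimal `gunicorn.conf.py` sketch (the thresholds are illustrative, and `myapp:app` below is a placeholder for your actual WSGI entry point):

```python
# gunicorn.conf.py
# Recycle each worker after roughly 1000 requests, so any memory the
# worker is holding on to is returned to the OS when the process exits.
max_requests = 1000
# Add a random 0-50 extra requests per worker so all workers don't
# restart at the same moment.
max_requests_jitter = 50
```

Start the server with `gunicorn -c gunicorn.conf.py myapp:app`, or pass `--max-requests 1000 --max-requests-jitter 50` directly on the command line.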

– praneeth
- Thanks for this good hint. I think it is a good idea to force a restart of a worker from time to time. ;) – chAlexey Nov 21 '19 at 10:46