I would like to log the current memory usage of a Python script in a production system. AWS offers Container Insights, but the metrics are well hidden and I'm not sure how to use them properly within other dashboards / logging and alerting systems. I'm also not certain whether they log peak memory at all.
The Python script is the production system. It runs on AWS inside a Docker container, and I ran into issues with a previous approach (link).
tracemalloc seems to be able to give me the information I want:
# At the start of the script
import logging
import tracemalloc

logger = logging.getLogger(__name__)
tracemalloc.start()
# script running...
# At the end
current, peak = tracemalloc.get_traced_memory()
logger.info(f"Current memory usage is {current / 10**6} MB")
logger.info(f"Peak memory usage was {peak / 10**6} MB")
tracemalloc.stop()
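To get a feel for the overhead question myself, I timed the same allocation-heavy workload with and without tracing. This is my own rough sketch, not something from the docs; `workload` is just a hypothetical stand-in for the script's allocation-heavy part, and the numbers will vary by workload:

```python
import time
import tracemalloc

def workload():
    # Hypothetical stand-in for an allocation-heavy section of the script.
    return [str(i) for i in range(200_000)]

# Baseline: tracing off
t0 = time.perf_counter()
workload()
baseline = time.perf_counter() - t0

# Same workload with tracemalloc active
tracemalloc.start()
t0 = time.perf_counter()
workload()
traced = time.perf_counter() - t0
tracemalloc.stop()

print(f"baseline: {baseline:.4f}s, traced: {traced:.4f}s")
```

In my quick tests the traced run is noticeably slower, but I don't know how representative that is for long-running production code.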
However, the docs state:
The tracemalloc module is a debug tool
So would it be a bad idea to wrap this around production code? How much overhead does it add? Are there other reasons not to use it in production?
(I have a pretty good idea of which parts of the code need the most memory and where the peak is reached. I want to monitor that part, or rather the size of those few objects / few lines of code. The alternative to tracemalloc seems to be something like this.)
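For context, the alternative I'm considering is the stdlib `resource` module, which reports the process's peak resident set size without any tracing overhead. A minimal sketch (Unix-only; note that `ru_maxrss` is in kilobytes on Linux but bytes on macOS):

```python
import resource

# Peak resident set size of this process so far.
# Units: kilobytes on Linux, bytes on macOS.
peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print(f"Peak RSS: {peak / 1024:.1f} MB")  # assumes Linux units
```

The trade-off, as I understand it, is that this gives whole-process RSS rather than per-allocation attribution, so it can't tell me which objects or lines are responsible.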