I'm writing a Python script and I want to know how much the RAM used by the Python process increases after certain commands, like this:
Memory1 = .... byte
#Some instructions ...
Memory2 = .... byte
What command could I put in my code?
There is no built-in way to do this short of asking the operating system for information about the current process's memory usage, for example by reading /proc/self/status (rather than the system-wide /proc/meminfo) on Linux.
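As a rough illustration, here is a minimal, Linux-only sketch that parses the VmRSS field from /proc/self/status; the helper name rss_kib is just for this example, and the file is not available on other platforms:

import sys

def rss_kib():
    """Current resident set size (VmRSS) in KiB, read from /proc/self/status (Linux only)."""
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                # The line looks like: "VmRSS:     7600 kB"
                return int(line.split()[1])
    raise RuntimeError("VmRSS not found in /proc/self/status")

memory1 = rss_kib()
# Some instructions that allocate memory ...
data = [0] * 1_000_000
memory2 = rss_kib()
print(f"RSS grew by roughly {memory2 - memory1} KiB")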
If you are content with a Unix-only solution in the standard library that can return only the peak resident memory used, you are looking for resource.getrusage(resource.RUSAGE_SELF).ru_maxrss.
This function returns an object that describes the resources consumed by either the current process or its children...
>>> import resource
>>> resource.getrusage(resource.RUSAGE_SELF)
resource.struct_rusage(ru_utime=0.058433, ru_stime=0.021911999999999997,
    ru_maxrss=7600, ru_ixrss=0, ru_idrss=0, ru_isrss=0, ru_minflt=2445,
    ru_majflt=1, ru_nswap=0, ru_inblock=256, ru_oublock=0, ru_msgsnd=0,
    ru_msgrcv=0, ru_nsignals=0, ru_nvcsw=148, ru_nivcsw=176)
This will not be able to tell you how much memory is being allocated between invocations, but it may be useful to track the growth in peak memory used over the lifetime of an application.
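A minimal sketch of that before/after pattern follows. Note that the units of ru_maxrss differ by platform (kilobytes on Linux, bytes on macOS), and since it is a high-water mark the difference only reflects growth in the peak; the helper name peak_rss_bytes is just for this example:

import resource
import sys

def peak_rss_bytes():
    """Peak resident set size of this process, normalised to bytes."""
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    # ru_maxrss is in kilobytes on Linux but in bytes on macOS.
    return rss if sys.platform == "darwin" else rss * 1024

memory1 = peak_rss_bytes()
# Some instructions that allocate memory ...
blob = bytearray(50 * 1024 * 1024)
memory2 = peak_rss_bytes()
# Memory that was allocated and then freed before the second call
# will not show up here, since only the peak is tracked.
print(f"Peak RSS grew by {memory2 - memory1} bytes")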
Some Python profilers written in C interface directly with CPython and can retrieve information about the total memory used. One example is Heapy (part of the guppy package), which also offers graphical plotting.
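A brief sketch of how that might look, assuming the guppy3 package is installed (pip install guppy3 on Python 3; the original Heapy lives in guppy for Python 2):

from guppy import hpy

h = hpy()
h.setrelheap()      # take the current heap as the reference point
# Some instructions that allocate memory ...
data = [object() for _ in range(100_000)]
print(h.heap())     # summary of objects allocated since setrelheap()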
If you only want to track the memory consumed by new objects as they are created, you can always use sys.getsizeof() on each new object to keep a running total of the space allocated.
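For example, along these lines; keep in mind that sys.getsizeof() reports only the shallow size of a container, not the objects it references:

import sys

allocated = 0

new_list = list(range(1000))
allocated += sys.getsizeof(new_list)

new_dict = {i: str(i) for i in range(1000)}
allocated += sys.getsizeof(new_dict)

# Shallow sizes only: the ints and strings inside the containers are not counted.
print(f"Roughly {allocated} bytes attributed to the new containers")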