
I just used valgrind to analyze my application for memory leaks, because for long runtimes it exhausts the available memory (8 GB). Memory usage keeps growing with runtime.

I just found that this behavior may be intended: Python memory leaks?

Is there any way, other than spawning new processes, to prevent this behavior?

I already tried using the Python garbage collector, without success: How can I explicitly free memory in Python?

I am using Python 2.7.3 ...
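
The manual triggering is essentially this pattern (simplified sketch; do_work is a placeholder for the real per-iteration workload):

    import gc

    def do_work():
        # placeholder for the actual long-running workload
        pass

    for i in xrange(10 ** 6):
        do_work()
        if i % 1000 == 0:
            # collect all generations; gc.collect() returns the
            # number of unreachable objects it found
            print "gc.collect() found %d objects" % gc.collect()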

Valgrind output with manually triggered garbage collection:

==16220== 3,145,728 bytes in 1 blocks are possibly lost in loss record 2,715 of 2,715
==16220==    at 0x4C28BED: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==16220==    by 0x463DA4: ??? (in /usr/bin/python2.7)
==16220==    by 0x4A1BB1: PyString_InternInPlace (in /usr/bin/python2.7)
==16220==    by 0x4AAED0: ??? (in /usr/bin/python2.7)
==16220==    by 0x4AAFD6: ??? (in /usr/bin/python2.7)
==16220==    by 0x4AB0C0: ??? (in /usr/bin/python2.7)
==16220==    by 0x4AAFD6: ??? (in /usr/bin/python2.7)
==16220==    by 0x4AB0C0: ??? (in /usr/bin/python2.7)
==16220==    by 0x4AAFD6: ??? (in /usr/bin/python2.7)
==16220==    by 0x4AB0C0: ??? (in /usr/bin/python2.7)
==16220==    by 0x535AE2: PyMarshal_ReadLastObjectFromFile (in /usr/bin/python2.7)
==16220==    by 0x528178: ??? (in /usr/bin/python2.7)
==16220== 
==16220== LEAK SUMMARY:
==16220==    definitely lost: 456 bytes in 10 blocks
==16220==    indirectly lost: 284 bytes in 6 blocks
==16220==      possibly lost: 3,844,678 bytes in 1,533 blocks
==16220==    still reachable: 16,937,271 bytes in 9,558 blocks

And without it:

==16249== 3,145,728 bytes in 1 blocks are possibly lost in loss record 2,721 of 2,721
==16249==    at 0x4C28BED: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==16249==    by 0x463DA4: ??? (in /usr/bin/python2.7)
==16249==    by 0x4A1BB1: PyString_InternInPlace (in /usr/bin/python2.7)
==16249==    by 0x4AAED0: ??? (in /usr/bin/python2.7)
==16249==    by 0x4AAFD6: ??? (in /usr/bin/python2.7)
==16249==    by 0x4AB0C0: ??? (in /usr/bin/python2.7)
==16249==    by 0x4AAFD6: ??? (in /usr/bin/python2.7)
==16249==    by 0x4AB0C0: ??? (in /usr/bin/python2.7)
==16249==    by 0x4AAFD6: ??? (in /usr/bin/python2.7)
==16249==    by 0x4AB0C0: ??? (in /usr/bin/python2.7)
==16249==    by 0x535AE2: PyMarshal_ReadLastObjectFromFile (in /usr/bin/python2.7)
==16249==    by 0x528178: ??? (in /usr/bin/python2.7)
==16249== 
==16249== LEAK SUMMARY:
==16249==    definitely lost: 456 bytes in 10 blocks
==16249==    indirectly lost: 284 bytes in 6 blocks
==16249==      possibly lost: 3,844,822 bytes in 1,534 blocks
==16249==    still reachable: 16,938,119 bytes in 9,558 blocks

Running valgrind --tool=massif also shows memory usage increasing over time (see this link for the PDF).

– gizzmole

1 Answer


I see two likely explanations:

  1. you are inadvertently keeping references to objects that are no longer needed (see the sketch after this list);
  2. you are underestimating the memory footprint of some of the objects you do need.
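
A minimal illustration of the first point, using a hypothetical module-level cache (the names _cache and expensive_transform are made up for the example):

    import gc

    _cache = []  # hypothetical module-level container

    def expensive_transform(data):
        # stand-in for real work; builds a large result object
        return [data] * 100000

    def process(data):
        result = expensive_transform(data)
        _cache.append(result)  # every result stays referenced forever
        return result

    for i in range(100):
        process(i)

    # All 100 results are still reachable from _cache, so an explicit
    # collection reclaims nothing: to the collector (and to valgrind)
    # this memory is live, not leaked.
    print gc.collect()  # typically 0 unreachable objects here
    print len(_cache)   # 100 live results

This is exactly the case gc.collect() cannot help with: the collector frees only unreachable objects, and these are still reachable.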

The possibility that the interpreter has a memory leak, while not out of the question, is far less likely than either of the above.

P.S. Much as I love valgrind, I don't think it's a terribly useful tool for pinpointing leaks in Python programs.
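
If you want a Python-level view instead, a cheap starting point is to diff live object counts per type between iterations of your workload; tools such as objgraph and heapy (guppy) build on the same idea. A minimal sketch, with run_one_iteration as a hypothetical stand-in for one pass of your code:

    import gc
    from collections import Counter

    def type_counts():
        # count the live objects the collector tracks, grouped by type name
        return Counter(type(o).__name__ for o in gc.get_objects())

    def run_one_iteration():
        # hypothetical stand-in for one pass of the real workload
        pass

    before = type_counts()
    run_one_iteration()
    gc.collect()  # discard genuine garbage before comparing
    after = type_counts()

    # types whose instance count grew across the iteration
    for name, delta in (after - before).most_common(10):
        print "%+6d  %s" % (delta, name)

Whatever keeps growing from one iteration to the next is a good candidate for an unwanted reference.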

– NPE
  • Are there any tools that could help me find the objects that are not freed? Since I use custom-wrapped C++ code, that interface could also be leaking. What would leaked memory from wrapped C++ objects look like in valgrind? Which tool covers both Python and wrapped C++ code? – gizzmole Nov 14 '13 at 13:43