We run a corporate forum built with Python (using the Django framework). We are intermittently observing memory-usage spikes on our production setup and want to track down the cause.
The spikes occur at random and, as far as our investigation so far shows, are not directly related to load.
I have searched extensively online, especially on Stack Overflow, but could not find a similar situation.
I did find plenty of profiling utilities, such as the Python memory profiler, but these require adding instrumentation at the code level. Since this is happening in production, such profilers are not much help right now (we plan to review our implementation in the next release).
We would like to investigate each occurrence after the fact.
So I am looking for a tool that can capture a dump of the running process for offline analysis (much like heap dumps in Java).
Any pointers? Is gdb the only option?
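For reference, the kind of snapshot I am hoping to capture is a per-type census of live objects, along the lines of this stdlib-only sketch (the function name `dump_object_census` and the output path are my own; in our case this would have to be injected into the already-running process, e.g. via gdb or an injection tool such as pyrasite, since we cannot redeploy with instrumentation):

```python
import gc
import json
import sys
from collections import Counter

def dump_object_census(path):
    # Walk every object the garbage collector tracks and count
    # instances per type, plus a rough per-type size estimate.
    counts = Counter()
    sizes = Counter()
    for obj in gc.get_objects():
        type_name = type(obj).__name__
        counts[type_name] += 1
        try:
            sizes[type_name] += sys.getsizeof(obj)
        except TypeError:
            pass  # some objects do not support getsizeof
    census = {
        name: {"count": counts[name], "approx_bytes": sizes[name]}
        for name in counts
    }
    with open(path, "w") as f:
        json.dump(census, f, indent=2, sort_keys=True)
    return census

if __name__ == "__main__":
    census = dump_object_census("/tmp/heap_census.json")
    # Print the ten most common object types for a quick look.
    top = sorted(census.items(), key=lambda kv: -kv[1]["count"])[:10]
    for name, info in top:
        print("%-24s %8d objects" % (name, info["count"]))
```

Comparing two such censuses taken before and during a spike would, I hope, point at the type(s) that are accumulating, but I do not know the standard way to trigger this from outside the process.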
OS: Linux
Python: 2.7 (we do not currently plan to upgrade unless that would help fix this issue)
Cheers!
AJ