
I have a slow-running process that uses some poorly-documented libraries.

I suspect the libraries are killing performance by continuously copying lots of memory (in RAM). This hypothesis is supported by the fact that perf record/report tells me that memcpy accounts for 15% of my run time.

But I'd like to catch this red-handed, as it were. I believe I could detect this condition if I could get a sense of the amount of memory per time unit that the program is trying to allocate.

Is there a tool, such as gdb or prof, which I can use to attach to a running process and get a sense of its malloc/free statistics?

Richard
  • `valgrind` should give you stats on how much memory was allocated. – Ajay Brahmakshatriya May 03 '17 at 06:59
  • Malloc and free are defined as weak symbols, so you can replace them with custom versions that provide tracing: http://stackoverflow.com/questions/17803456/an-alternative-for-the-deprecated-malloc-hook-functionality-of-glibc – Lanting May 03 '17 at 07:15
  • Just another question where double-tagging wrecks the question. In C++, you can replace `operator new`. If you're dealing with C, theoretically you can't replace `malloc`. (But in practice, you might). – MSalters May 03 '17 at 07:36
  • And `valgrind` in `callgrind` or `cachegrind` mode can also give exact call-stack information (chains of which functions called memcpy), with profiling-style cost estimates (using an in-order, one-cycle-per-instruction cost model, unlike a real out-of-order CPU). – osgx May 03 '17 at 14:22
  • @MSalters: Just so. Though perhaps double-tagging is more appropriate in this instance: the C++ code is interfacing with libraries written in C. – Richard May 03 '17 at 17:04
  • @Richard: Important information. So important in fact that it should be in the question. – MSalters May 03 '17 at 19:44

0 Answers