
In a recent project I had to measure the memory requirements of different algorithms for comparison. I had no control over the memory allocation itself (the algorithms were written in matlab), but it seemed that the system allocated memory only when needed and released it soon after it was no longer needed. So the idea for measuring memory requirements was as follows:

  1. Get the PID of the running matlab process
  2. Before the start of the algorithm, read /proc/<PID>/status, parse it, and record the `VmSize` entry
  3. In the inner loop of each algorithm, do the same as in 2) and record the difference between the `VmSize` values as the memory consumption of the algorithm (a minimal sketch of steps 1 and 2 follows this list)
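
A minimal sketch of steps 1 and 2 from within matlab itself, assuming a Linux `/proc` layout; note that `feature('getpid')` is an undocumented matlab call, so treat this as an illustration rather than a guaranteed API:

```matlab
% Read this process's VmSize from /proc/<PID>/status (Linux only).
% feature('getpid') is undocumented; verify it works in your matlab version.
pid = feature('getpid');
txt = fileread(sprintf('/proc/%d/status', pid));
tok = regexp(txt, 'VmSize:\s*(\d+)\s*kB', 'tokens', 'once');
vmsize_kb = str2double(tok{1});   % virtual memory size in kB
```

Step 3 is then just the difference of two such readings, e.g. `used_kb = vmsize_kb_after - vmsize_kb_before`.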

Now my question is: is that a reliable estimator of memory consumption? Or should I have used a different field (there are quite a few `Vm*` fields to choose from, but I found `VmSize` to track what I expected most closely)? Note that I don't need 'byte-accurate' measurements, only rough rule-of-thumb estimates.

Elmar Zander
  • You could also read other files in `/proc/$PID/`, in particular `/proc/$PID/maps` and `/proc/$PID/statm`; however, `matlab` might not release some "unused" memory back to the kernel but keep it for future reuse. Also, consider http://scilab.org/ which is a free software alternative to Matlab. – Basile Starynkevitch Mar 26 '13 at 17:52
  • @BasileStarynkevitch a) `statm` looks OK, since there's less to parse than in `status`; however, it contains the same info, so no gain there (a short sketch follows these comments). `maps` looks a bit like overkill to me, and I wouldn't know what to use of all that info anyway. b) Might be, but it doesn't look like that (see the second sentence of my question). c) I know scilab, but I don't intend to switch. My code for computing tensor solutions to stochastic PDEs is more than 30k lines excluding comments and blank lines, and I don't want to port that. For me the future is the Python/numpy/scipy combination, which looks much more promising. – Elmar Zander Mar 26 '13 at 19:49
  • @DarkCthulhu Similar, but no solution for me there. I can't afford to run valgrind on the matlab process, and I don't want to. I want something that works from the inside. And concerning "Use VmSize or something else?", the answers are inconclusive. BTW: I don't care how much memory matlab needs for shared libs or whatever, only what it allocates additionally for my data structures, presumably somewhere on the heap. – Elmar Zander Mar 26 '13 at 19:56
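
For completeness, a similar sketch using `/proc/<PID>/statm`, as discussed in the comments; its fields are bare numbers in pages rather than kB, and the 4 kB page size assumed below is typical on x86 Linux but not guaranteed (`getconf PAGESIZE` reports the real value):

```matlab
% First field of /proc/<PID>/statm is the total program size in pages,
% i.e. the same quantity as VmSize in /proc/<PID>/status.
pid  = feature('getpid');              % undocumented call, as above
vals = sscanf(fileread(sprintf('/proc/%d/statm', pid)), '%d');
vmsize_kb = vals(1) * 4;               % assumes 4 kB pages
```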
