
I am working on a fairly large library that continuously allocates and frees memory as it executes. For the past few weeks I have been trying to keep the memory consumption stable, but it appears to be increasing over time. The behavior I can't quite explain is that the increase is not linear. There is a "baseline" memory level that the process hovers at for a while, and then it jumps to a new "baseline". As time passes, each jump involves more and more memory. So let's say memory usage jumped from 512 KB to 1024 KB after a few hours of operation. It might then go from 1024 KB to 2048 KB overnight, and to 4096 KB the next time. Here is a chart of what memory usage looks like:

chart of memory usage

I have it running on Linux, and Valgrind gives it a clean bill of health, if that is relevant at all. I am using the following code to read the virtual memory consumption of my process from /proc:

/* parseLine was referenced but not shown in the original post; a simple
   sscanf-based version is supplied here so the snippet is self-contained.
   A line looks like "VmSize:    1024 kB"; we extract the number. */
static int parseLine(char* line){
    int value = -1;
    sscanf(line, "%*s %d", &value);
    return value;
}

int getValue(){ //Note: this value is in KB!
    FILE* file = fopen("/proc/self/status", "r");
    if (file == NULL)
        return -1;

    int result = -1;
    char line[128];

    while (fgets(line, 128, file) != NULL){
        if (strncmp(line, "VmSize:", 7) == 0){
            result = parseLine(line);
            break;
        }
    }
    fclose(file);
    return result;
}
ukyo_rulz
  • To help with debugging, investigate whether you can reproduce the error quickly, i.e. make a modification that runs through the same program logic (the same allocations/deallocations) but much faster – M.M Feb 15 '16 at 01:42
  • the "jumping" is probably caused by your OS allocating memory to processes in chunks – M.M Feb 15 '16 at 01:42
  • Would it make sense for the size of the allocated chunks to increase with time though? – ukyo_rulz Feb 15 '16 at 04:31
  • Yeah, it might use some sort of increasing strategy to avoid lots of small allocations – M.M Feb 15 '16 at 04:34

1 Answer


You are most likely suffering from memory fragmentation.

What happens is that as you release chunks of memory, you leave behind small holes that may be unusable the next time you request memory. If you keep allocating and releasing in a pattern that creates these small, unusable holes, the allocator's only recourse is to request more chunks of memory from the system.

If you have well-defined rules for how your program uses memory for specific things, you might want to consider a memory pool to help allocate and release memory according to the specific requirements of your program, rather than the general-purpose requirements of the standard library.
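As a rough sketch of the idea (the names `pool_init`, `pool_alloc`, etc. are my own, not from any particular library): carve a single arena into fixed-size blocks and recycle them through a free list. Because every block has the same size, freeing and reallocating can never fragment the arena.

```c
#include <stddef.h>
#include <stdlib.h>

/* Minimal fixed-size memory pool sketch. Each free block stores a
   pointer to the next free block, forming a singly linked free list. */
typedef struct pool_node { struct pool_node *next; } pool_node;

typedef struct {
    unsigned char *arena;  /* one big allocation from the system */
    pool_node *free_list;  /* head of the free-block list */
} pool;

int pool_init(pool *p, size_t block_size, size_t count) {
    if (block_size < sizeof(pool_node)) block_size = sizeof(pool_node);
    p->arena = malloc(block_size * count);
    if (!p->arena) return -1;
    p->free_list = NULL;
    for (size_t i = 0; i < count; i++) {
        /* Thread every block onto the free list. */
        pool_node *n = (pool_node *)(p->arena + i * block_size);
        n->next = p->free_list;
        p->free_list = n;
    }
    return 0;
}

void *pool_alloc(pool *p) {
    pool_node *n = p->free_list;
    if (n) p->free_list = n->next; /* pop the first free block */
    return n;                      /* NULL when the pool is exhausted */
}

void pool_free(pool *p, void *block) {
    pool_node *n = block;
    n->next = p->free_list;        /* push the block back onto the list */
    p->free_list = n;
}

void pool_destroy(pool *p) { free(p->arena); }
```

A freed block is reused immediately by the next `pool_alloc`, so the pool's footprint stays constant no matter how long the allocate/release pattern runs. You would typically keep one pool per object size your library churns through.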

paddy
  • Are there any references as to how memory usage would look, if it was indeed being fragmented? I want to confirm whether the increasing size of the additional memory consumption makes sense in that scenario. – ukyo_rulz Feb 15 '16 at 01:22
  • I don't know. You could investigate that yourself. Or you could look at your code and make sure you don't have patterns like this: Allocate 1024 bytes for A; allocate 50 bytes for B; free A; allocate 1024 bytes for C; free B; .... – paddy Feb 15 '16 at 01:26
  • Thanks for the responses. The library code is pretty large and I am sure that it is constantly creating, adding to, clearing and deleting various arrays and vectors. I'll keep this question open while I investigate. – ukyo_rulz Feb 15 '16 at 01:36