I have a C++ process that ingests large blocks of data and stores them in memory. The storage array holds roughly 10 GB of data partitioned into 4 MB blocks. As new data arrives, the process allocates a new block and, once the buffer is full, deletes the oldest block. The process cycles through the full circular buffer once every 10 to 60 seconds. We are running on x86_64 under RH5 and RH6, compiling with the Intel 14 compiler.
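To clarify the allocation pattern, here is a simplified sketch of what we do (the names and sizes are illustrative, not our actual code):

```cpp
#include <algorithm>
#include <cstddef>
#include <deque>
#include <vector>

constexpr std::size_t kBlockSize = 4 * 1024 * 1024;  // 4 MB per block
constexpr std::size_t kMaxBlocks = 2560;             // ~10 GB total

// Oldest block at the front, newest at the back.
std::deque<std::vector<char>*> g_blocks;

void ingest_block(const char* data, std::size_t len) {
    // Allocate a fresh 4 MB block for the incoming data...
    auto* block = new std::vector<char>(kBlockSize);
    std::copy(data, data + len, block->begin());
    g_blocks.push_back(block);

    // ...and delete the oldest block once the buffer is full.
    if (g_blocks.size() > kMaxBlocks) {
        delete g_blocks.front();
        g_blocks.pop_front();
    }
}
```

So every cycle through the buffer is a steady stream of 4 MB `new`s and `delete`s, with the full working set live in between.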
We are seeing a problem where the overall process memory usage grows over time until the OS runs out of memory and eventually the box dies. We have been hunting for memory leaks by running the process under TotalView to determine where the memory is going, but it reports no leaks.
On the heap report produced by TotalView we saw the 10 GB of allocated memory for the stored data, but we also saw 4+ GB of "deallocated" memory. Looking through the heap display, it appeared that our heap was very fragmented: large chunks of "allocated" memory interspersed with large chunks of "deallocated" memory.
Is this "deallocated" memory heap space that has been freed by my process but not returned to the OS, and is it reasonable to think that this fragmentation may be the source of our memory "leak"?
If so, how do I get the OS to reclaim the memory?
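For context, one glibc-specific call we are aware of but unsure about is `malloc_trim`, which asks the allocator to hand free pages back to the OS. A minimal sketch of what we had in mind (assuming glibc malloc; `try_release_memory` is a hypothetical helper, not in our code):

```cpp
#include <malloc.h>  // glibc-specific: malloc_trim

// Ask glibc to return free heap memory to the OS. Note that malloc_trim
// can only release the contiguous free region at the top of the heap, so
// if live blocks are interleaved with freed ones (fragmentation), it may
// release little or nothing.
int try_release_memory() {
    // Argument 0 = keep no padding; returns 1 if memory was released,
    // 0 if nothing could be trimmed.
    return malloc_trim(0);
}
```

We are not sure whether periodically calling something like this is a sane strategy, or whether it would just fight the fragmentation.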
Do we need to rework our process to reuse discarded data blocks instead of relying on the OS to do our memory management for us?
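If reuse is the answer, we imagine something like the following free-list pool, where retired blocks are handed back out instead of deleted, so the heap footprint stays flat at its high-water mark (a rough sketch, names illustrative):

```cpp
#include <cstddef>
#include <vector>

// Keep freed fixed-size blocks on a free list and reuse them, rather
// than returning them to the allocator on every cycle.
class BlockPool {
public:
    explicit BlockPool(std::size_t block_size) : block_size_(block_size) {}

    // Hand out a recycled block if one is available; otherwise allocate.
    // The pool only grows until the buffer reaches steady state.
    char* acquire() {
        if (free_list_.empty())
            return new char[block_size_];
        char* block = free_list_.back();
        free_list_.pop_back();
        return block;
    }

    // Retire a block for later reuse instead of deleting it.
    void release(char* block) { free_list_.push_back(block); }

private:
    std::size_t block_size_;
    std::vector<char*> free_list_;
};
```

Is this the right direction, or is there allocator tuning that would make the simple new/delete pattern behave?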