
This is my very first question here on Stack Overflow. I have searched extensively for an explanation of what I'm experiencing with the following lines of code:

unsigned long long _mem1 = getUsedVirtualMemory();
vector.erase(vector.begin() + _idx);
contained = false; // don't stop the loop
_idx--; // the removal shifts the indices, so this index must be considered again

_mem1 = getUsedVirtualMemory() - _mem1;
if (_mem1 > 0) printf("Memory - 2 mem1: %llu\n", _mem1);

I have a huge memory consumption in my program, and after an intensive debug session, some printfs and time-consuming analysis, I arrived at this point:

getUsedVirtualMemory is implemented with the following code:

// Requires <windows.h> and <psapi.h>, linked against Psapi.lib
unsigned long long getUsedVirtualMemory()
{
    PROCESS_MEMORY_COUNTERS_EX pmc;
    GetProcessMemoryInfo(GetCurrentProcess(), (PROCESS_MEMORY_COUNTERS*)&pmc, sizeof(pmc));
    SIZE_T virtualMemUsedByMe = pmc.PrivateUsage;
    return virtualMemUsedByMe;
}

to obtain the amount of virtual memory allocated by the process; the vector is a vector of objects (not pointers).

In most cases the vector's erase method seems to work as expected, but in some cases it looks like the erase method of that vector increases the memory used by the process instead of freeing it. I'm using the Windows system function GetProcessMemoryInfo in a lot of places around the code to debug this problem, and it seems to return an accurate value for the used virtual memory.

I'm using Visual Studio C++ 2010 Professional.

Let me know if more information is needed.

Thanks for any replies.

UPDATE:

Everything you wrote in your replies is correct, but I forgot to mention the following details:

  1. I already know that a vector has a size (actual number of elements) and a capacity (allocated slots to store elements)
  2. I already know that the erase method does not free memory (I looked for a lot of documentation about that method)
  3. finally, I will add other elements to that vector later, so I don't need to shrink that vector.

The actual problem is that in this case, the value of "_mem1" in the last line of code shows a difference of 1,600,000 bytes: an unjustified increase in memory, where I expected 0 bytes.

Even in the case where the used memory after the operation were less than the initial value, I would expect a very large number, because the unsigned subtraction would wrap around, as explained for instance at Is unsigned integer subtraction defined behavior?

Instead of the expected results I get a value greater than 0 but relatively small.

To better understand the extent of the problem: iterating some thousands of times over that piece of code unexpectedly allocates about 20 GB of virtual memory.

Ivano85
    Freeing allocations made by runtime library functions does not necessarily release anything back to the operating system. Memory allocators will generally keep a pool of available blocks on hand to satisfy later allocations, without wasting time handing the memory back to the operating system only to re-request it later. So it is not surprising that you do not see the memory usage decrease. – nobody Dec 10 '14 at 18:03
  • I definitely agree with you, but what I can't justify is "why should an erase operation produce an allocation of about 1.5 MB of extra memory?" – Ivano85 Dec 11 '14 at 09:44

5 Answers

4

A vector has:

  • a size(), which indicates how many active elements are in the container
  • a capacity(), which tells how many elements are reserved in memory

erase() reduces the size (to zero, if you erase all elements). It does not free the allocated capacity.

You can use shrink_to_fit(), which requests that the capacity be reduced to the size.

Changing the size with resize() or the capacity with reserve() may increase the memory allocated if necessary, but neither necessarily frees memory if the new size/capacity is lower than the existing capacity.

Christophe
    Unfortunately, `shrink_to_fit()` is a non-binding request so if you need certainty you're stuck with [this old swap trick](http://stackoverflow.com/a/1111099/445976). – Blastfurnace Dec 10 '14 at 18:10
  • True, it's a non binding request. However the MSVC 2010 implementation that OP uses seems to ensure that request is fulfilled: http://msdn.microsoft.com/fr-fr/library/dd647619%28v=vs.100%29.aspx – Christophe Dec 10 '14 at 19:15
3

It's because erase will not free memory, it just erases elements. Take a look at Herb Sutter's article on this.

To (really) release the memory you could do (from reference link):

The Right Way To "Shrink-To-Fit" a vector or deque

So, can we write code that does shrink a vector "to fit" so that its capacity is just enough to hold the contained elements? Obviously reserve() can't do the job, but fortunately there is indeed a way:

vector<Customer>( c ).swap( c );
// ...now c.capacity() == c.size(), or
// perhaps a little more than c.size()
MatiasFG
  • Nice trick: you create a temporary unnamed vector and do the swap. As anonymous temporaries are automatically destroyed at the end of the statement in which they are created, the memory gets released. However, it's a little tricky to read. Is there a performance difference with the simpler `c=vector();`, which is easier to understand? – Christophe Dec 10 '14 at 19:06
  • @Christophe: Actually, this example creates a __copy__ of the vector and then swaps it with the original. Hopefully the copy's `capacity()` is actually less than the original's and it's worth the momentary increase in memory use and element copying. – Blastfurnace Dec 10 '14 at 19:23
  • Right, it is basically 'trimming' itself :) – MatiasFG Dec 10 '14 at 19:25
  • @Blastfurnace yes, but as all the elements were previously erased it trims down to empty. This is why I asked if it wouldn't be simpler to just assign an empty vector. – Christophe Dec 10 '14 at 19:34
  • @Christophe this is not emptying the vector, it is freeing the 'extra' memory that vector (possibly) reserves while growing. Take a look a the reference, it should explain this idiom :) – MatiasFG Dec 10 '14 at 19:39
  • @Christophe: This example is how we did `shrink_to_fit()` before it existed in the standard, note the container being passed to the temporary's constructor. It's not the `release_capacity()` trick of swapping with an empty container. – Blastfurnace Dec 10 '14 at 19:41
  • I'm evaluating if a vector shrink may help to reduce the incidence of the problem, but I think that extra memory used by the vector as a "buffer", has a relatively low incidence. As specified with the update to the question, I expected to see a 0 change of used memory or a "overflow" in the subtraction. – Ivano85 Dec 11 '14 at 09:47
0

vector.erase() is only guaranteed to remove elements from the vector; it is not guaranteed to reduce the size of the underlying array (as that process is rather expensive). I.e., it only removes the data, it doesn't necessarily deallocate the storage.

If you need to have a vector that is only as large as the elements it contains, try using vector.resize(vector.size())

slb
    Your last example won't release any memory, `resize()` with the same or smaller `size()` doesn't reduce `capacity()`. – Blastfurnace Dec 10 '14 at 18:00
  • Thank you for your reply and yes, I agree with you, but why in a lot of cases it produces an allocation of about 1.5Mb of memory (supposedly lost and with non constant amount)? I expected the difference of allocated memory after the erase would be 0. Please refer to the update of the question. – Ivano85 Dec 11 '14 at 09:53
0

IIRC, in a Debug build on Windows, `new` is actually `#define`d to be `DEBUG_NEW`, which causes (amongst other things) memory blocks not to be actually freed, but merely marked as 'deleted'.

Do you get the same behaviour with a release build?

Richard Hodges
  • Yes unfortunately I get the same behaviour with a release build, it only allocates less memory (about 50%). – Ivano85 Dec 10 '14 at 20:32
0

One part of the puzzle might be that std::vector cannot delete entries from the middle of the underlying memory buffer (which is where yours are), so the kept entries have to be moved - potentially to an altogether different buffer.

Since you're erasing an element near the front, std::vector is allowed (the standard states that erase() invalidates all iterators at/after the point of erasure - nearly all of them in your case) to allocate an additional buffer, copy the remaining elements to it, and then discard the old buffer. So you may end up with two buffers being in use at the same time, and your memory manager will likely not return the discarded buffer to the operating system, but rather keep the memory around to re-use in a subsequent allocation. This would explain the memory usage increase for a single one of your loop iterations.

Dreamer