6

I know that vectors double in size whenever their capacity() is exceeded. This operation takes some time which is why vectors are supposed to have amortized constant time for addition of elements with push_back().

  1. What I'm wondering is: what happens when a vector shrinks so that its size() is less than half of its capacity()?

  2. Do vectors ever relinquish the memory which they use, or is it just gone until the vector is destroyed?

It could be a lot of wasted memory if they don't shrink in size ever, but I've never heard of them having that feature.

John Humphreys
    `I know that vectors double in size` Actually, that's implementation-defined. MS vectors grow by a factor of 1.5 – RedX Aug 24 '11 at 15:15
  • If you're going to resize a lot, you should perhaps consider a data structure which uses only the memory it needs; an `std::set`, or an `std::map`, or you could roll your own – dario_ramos Aug 24 '11 at 15:18
  • possible duplicate of [Freeing allocated memory](http://stackoverflow.com/questions/3248726/freeing-allocated-memory) – Jacob Aug 24 '11 at 15:23
  • @dario_ramos: Agree up to the point where you say 'roll your own'. There are several more you can try before that, and this will never be a good idea. – Martin York Aug 24 '11 at 15:53
  • @Dario - A set or a map uses *a lot* more memory than a `vector`, so the potential savings might not be that interesting. – Bo Persson Aug 24 '11 at 16:00
  • @Bo: Yes, because they are implemented using Red-Black balanced tree, IIRC. But if `elementSize >> std::mapOverheadPerElement` it's better to use a set or map – dario_ramos Aug 24 '11 at 16:04
  • @Martin: You're right, I forgot to consider the collections from `TR1`, boost and similar. But I'm not too familiar with those, so I can't say – dario_ramos Aug 24 '11 at 16:07
  • 1
    @dario_ramos: Or something closer to home like std::deque – Martin York Aug 24 '11 at 16:24

5 Answers

13

No, the memory is never freed (i.e. the capacity is never reduced) until the vector is destroyed. A common idiom for freeing up some memory is to create a new vector of the correct size, and use that instead:

std::vector<type>(originalVector).swap(originalVector);

(inspired by "More Exceptional C++", Item #7)

SSJ_GZ
3

If you want to make sure your vector uses as little space as possible, you can say:

std::vector<Foo>(my_vector).swap(my_vector);

You could also call the shrink_to_fit() member function, but that is just a non-binding request.

fredoverflow
  • I edited the question to make the question clearer... There are two questions here. –  Aug 24 '11 at 15:20
  • 2
    when you move to a temporary, and then swap with that temporary, the occupied storage will likely not change, as the move is probably just taking over the pointers, and not doing any reallocations (which is the main point of move semantics). Also while shrink_to_fit is a nonbinding request, most libraries will implement it. Likewise creating a vector as a copy of another is not guaranteed to yield to a minimal capacity, though usually does. – PlasmaHH Aug 24 '11 at 15:31
0

Erasing elements from a vector, or even clearing the vector entirely, does not necessarily free any of the memory associated with those elements. This is because the maximum size of a vector since its creation is a good estimate of the future size of the vector.

To use the shrink to fit idiom:

std::vector<int> v;
//v is swapped with a temporary copy, whose capacity typically matches its size
std::vector<int>(v).swap(v);

Solution in C++0x

In C++0x, some containers provide this idiom as the member function shrink_to_fit(), e.g. vector, deque, and basic_string. shrink_to_fit() is a non-binding request to reduce capacity() to size().

0

They don't shrink in size.

It would not be generally useful to de-allocate the memory.

For example, a vector could shrink now but then grow again later. It would be a waste to deallocate the extra memory (and copy all the elements to new storage) only to reallocate it later.

If a vector is shrinking, there is also a good chance the vector will be shrunk to empty and then destroyed. In that case deallocating as it shrinks would be a waste of time as well.

It can also be a useful optimization to reuse a vector object so as to avoid allocating/deallocating the contents. That would be ruined by introducing deallocations as it shrinks.

Basically, most of the time deallocating the extra memory isn't worth it. In the few cases where it is, you can specifically ask for deallocation.

Winston Ewert
-2

The memory is gone until the vector is destroyed.

RedX
  • Setting the capacity doesn't help because it's just a request for a minimum capacity. At least the last time I looked, maybe things have changed with the new standard. – john Aug 24 '11 at 15:18