My program uses 569 MB of memory but needs to use only 500 MB. I have a lot of std::vectors of different sizes. Is there a way to set the capacity to the number of elements, to avoid the memory overhead? (I don't care about performance; memory is key.)
-
Not guaranteed to work, but you may use the C++11 `shrink_to_fit` function to make vectors just large enough to hold their data. – Paul Stelian Jul 18 '16 at 08:48
-
You may try whether the [`std::vector::shrink_to_fit()`](http://en.cppreference.com/w/cpp/container/vector/shrink_to_fit) function helps. – πάντα ῥεῖ Jul 18 '16 at 08:49
-
Is Boost an option? Then use http://www.boost.org/doc/libs/1_61_0/doc/html/boost/container/static_vector.html – PiotrNycz Jul 18 '16 at 08:57
-
@PaulStelian: That's probably too late here. If the program is limited to 500 MB, the OS may kill the process or refuse a memory allocation; the latter turns into a `std::bad_alloc` when the vector grows too big. – MSalters Jul 18 '16 at 08:59
4 Answers
How to limit the capacity of std::vector to the number of elements
The best that you can do is to reserve the required space before you add the elements. This should also give the best performance, because it avoids reallocations and the copying they cause.
If that is not practical, then you can call std::vector::shrink_to_fit()
after the elements have been added. Of course, that doesn't help if memory usage must never peak above the limit, since the peak occurs before the shrink.
Technically, neither of these methods is guaranteed by the standard to make the capacity match the size. You are relying on the behaviour of the standard library implementation.

-
`v.reserve(n)` guarantees that the vector is able to hold *at least* n elements, but the capacity may be greater than n (C++17 23.3.11.3.3), so there is no guarantee of avoided overhead at all. As far as I interpret 23.3.11.3.9, even shrink_to_fit does not guarantee size() == capacity() afterwards... – Aconcagua Jul 18 '16 at 09:03
-
@Aconcagua good point, I thought that there was a guarantee when calling `reserve` on a fresh vector, but I was apparently mistaken. There seems to be no way to guarantee that capacity matches size with vector. I've changed my answer to reflect that – eerorika Jul 18 '16 at 09:24
-
There is no guarantee under the standard that `new char[1000]` will not allocate a GB either, so the `unique_ptr` approach has no "guarantee of no overhead". – Yakk - Adam Nevraumont Jul 18 '16 at 11:15
-
@Yakk oh dear. I'll take your word for it. There isn't really much guaranteed about dynamic memory then, is there? – eerorika Jul 18 '16 at 11:20
You are perhaps looking for the shrink_to_fit
method, see http://en.cppreference.com/w/cpp/container/vector/shrink_to_fit.
Or, if you are not able/allowed to use C++11, you may want to use the swap-to-fit idiom: https://en.wikibooks.org/wiki/More_C%2B%2B_Idioms/Shrink-to-fit
In C++11 (note that the shrink_to_fit
request may be ignored by the implementation):
```cpp
vector<int> v;
// ...
v.shrink_to_fit();
```
The swap-to-fit idiom:
```cpp
vector<int> v;
// ...
vector<int>( v ).swap(v);
// v is swapped with its temporary copy, which is capacity-optimal
```

Write a wrapper that checks the size of your vector before pushing anything to it, or use a fixed-size std::array instead.

-
As far as I can see, there is no way to guarantee that a vector's capacity equals its size. std::array does so, but you need to know the final size at compile time, since the size is a template parameter and cannot be modified. If worst comes to worst, you might have to fall back to "good" old pre-C++11 arrays. Not nice, but I don't see another way to guarantee size constraints if the (final) size is unknown at compile time... – Aconcagua Jul 18 '16 at 09:17
You can use a custom allocator and feed the required capacity to the template argument. Modifying the example from this thread: Compelling examples of custom C++ allocators?
```cpp
#include <memory>
#include <iostream>
#include <vector>
#include <stdexcept>

namespace my_allocator_namespace
{
    template <typename T, size_t capacity_limit>
    class my_allocator : public std::allocator<T>
    {
    public:
        typedef size_t size_type;
        typedef T* pointer;
        typedef const T* const_pointer;

        template <typename _Tp1>
        struct rebind
        {
            typedef my_allocator<_Tp1, capacity_limit> other;
        };

        pointer allocate(size_type n, const void* hint = 0)
        {
            // Refuse over-sized requests instead of silently returning a
            // smaller block, which would cause out-of-bounds writes when
            // the vector uses the capacity it thinks it has.
            if (n > capacity_limit)
                throw std::length_error("capacity_limit exceeded");
            return std::allocator<T>::allocate(n, hint);
        }

        void deallocate(pointer p, size_type n)
        {
            std::allocator<T>::deallocate(p, n);
        }

        my_allocator() throw() : std::allocator<T>() { }
        my_allocator(const my_allocator& a) throw() : std::allocator<T>(a) { }
        template <class U, size_t N>
        my_allocator(const my_allocator<U, N>& a) throw() : std::allocator<T>(a) { }
        ~my_allocator() throw() { }
    };
}

using namespace std;
using namespace my_allocator_namespace;

int main()
{
    vector<int, my_allocator<int, 20> > int_vec(10);
    for (int i = 0; i < 20; i++)
    {
        std::cerr << i << "," << int_vec.size() << std::endl;
        int_vec.push_back(i);  // throws once growth would exceed 20 elements
    }
}
```
Note that an allocation request above the limit now throws std::length_error, so pushing past the capacity limit fails with an exception instead of silently writing out of range.