As pyCthon points out, this other question explains that size_t
is the right type to use for the size here, as it is guaranteed to be large enough to represent the size of any object your architecture supports.
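As a minimal sketch of that (the sizes here are made up), using size_t as the index type rather than int:

#include <cstddef>
#include <vector>
using namespace std;

int main ()
{
    vector<unsigned int> v(100, 0);

    // size_t is wide enough to index any vector the platform can
    // hold, whereas int can overflow on very large vectors
    for (size_t i = 0; i < v.size(); ++i)
        v[i] = static_cast<unsigned int>(i);

    return 0;
}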
Secondly, the .resize()
method doesn't need to be called each time. Instead, construct the new vector and .push_back(newvec)
to add it to the outer vector. The internal allocator will grow the storage as it sees fit, and is generally the best option: with the usual geometric growth strategy, n calls to .push_back() cause only O(log n) reallocations. That matters here, because every time the vector has to reallocate for more space, it copies the entire existing contents to a new block of memory.
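A minimal sketch of that pattern (the dimensions are made up), building a vector of vectors row by row:

#include <cstddef>
#include <vector>
using namespace std;

int main ()
{
    vector< vector<unsigned int> > rows;

    for (size_t i = 0; i < 1000; ++i)
    {
        vector<unsigned int> newvec(100, 0);  // construct one complete row
        rows.push_back(newvec);               // let the allocator grow rows as needed
    }

    return 0;
}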
Even better, if you can work out the total size of the arrays at the start, do so: call .reserve(size)
once up front and then use .push_back()
for each element, and there will be only a single allocation for the whole block at the beginning.
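Continuing the sketch above (sizes again made up), reserving the outer vector before the loop:

#include <cstddef>
#include <vector>
using namespace std;

int main ()
{
    vector< vector<unsigned int> > rows;
    rows.reserve(1000);  // one allocation up front for the outer vector

    for (size_t i = 0; i < 1000; ++i)
        rows.push_back(vector<unsigned int>(100, 0));  // never reallocates rows

    return 0;
}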
If you want to know the maximum number of elements a vector can hold on your architecture, call vector::max_size(). Example adapted from cplusplus.com:
// checking a vector's max_size
#include <iostream>
#include <vector>
using namespace std;

int main ()
{
    vector<int> myvector;
    cout << "max_size: " << myvector.max_size() << "\n";
    return 0;
}
Running this on ideone.com quickly gets me a max size of 1,073,741,823, and if the vector is a vector< vector< unsigned int > >
instead, I get 357,913,941.