C++'s vector<> guarantees amortized constant overhead when you append data. To my mind this only works out if the capacity grows by 100% whenever the vector<> is full; libstdc++ and libc++ use such a size increment. But I saw some strange behaviour with MSVC++:
#include <iostream>
#include <vector>
#include <cmath>

using namespace std;

int main()
{
    vector<size_t> vi;
    for( size_t i = 0x100000; i--; )
    {
        // remember the capacity before the insertion
        size_t before = vi.capacity();
        vi.emplace_back( i );
        // print the growth percentage whenever a reallocation happened;
        // the first insertion divides by zero and is filtered via isinf
        if( double incr = ((double)vi.capacity() / (double)before - 1.0) * 100.0; !isinf( incr ) && incr )
            cout << (int)(incr + 0.5) << "%" << endl;
    }
}
MSVC++ reports a 50% size increment. Is this really allowed by the standard?
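As a cross-check, the reallocation cost can also be measured directly instead of being inferred from capacity(). The sketch below is my own addition and the Counted type is purely illustrative; it counts how many times elements are copied or moved when the vector reallocates:
#include <iostream>
#include <vector>
#include <cstddef>

using namespace std;

// illustrative element type that counts copy/move constructions,
// i.e. the relocation work done whenever the vector reallocates
struct Counted
{
    static inline size_t relocations = 0;
    size_t value;
    Counted( size_t v ) : value( v ) {}
    Counted( Counted const &other ) : value( other.value ) { ++relocations; }
    Counted( Counted &&other ) noexcept : value( other.value ) { ++relocations; }
};

int main()
{
    constexpr size_t N = 0x100000;
    vector<Counted> vc;
    for( size_t i = N; i--; )
        vc.emplace_back( i );
    // average number of element relocations per appended element
    cout << (double)Counted::relocations / N << endl;
}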
This code estimates the average per-element copy overhead of appending at different size-increment percentages:
#include <iostream>
#include <cstdint>

using namespace std;

int main()
{
    // simulates the copy cost of appending MAX_SIZE elements one by one
    // with a given capacity increment in percent
    auto time1M = []( double pctIncrement ) -> double
    {
        double increment = 1.0 + pctIncrement / 100.0;
        int64_t time = 0;
        constexpr size_t MAX_SIZE = 1'000'000;
        for( size_t capacity = 0, size = 0; size != MAX_SIZE; ++size )
            if( size == capacity )
            {
                // grow the capacity and account for copying the existing elements
                capacity = (size_t)(size * increment);
                capacity += capacity == size; // grow by at least one element
                time += size;
            }
        // average number of copied elements per insertion
        return time / (double)MAX_SIZE;
    };
    for( double pctIncrement = 10.0; pctIncrement <= 100.0; pctIncrement += 10.0 )
        cout << pctIncrement << "%: " << time1M( pctIncrement ) << endl;
}
It prints the following:
10%: 10.1707
20%: 5.65891
30%: 4.32914
40%: 3.46261
50%: 2.09975
60%: 1.78124
70%: 2.09618
80%: 1.35964
90%: 1.91513
100%: 1.04858
An average overhead of about 1.0 copies per element is only reached with a 100% size increment.
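For what it's worth, this matches a rough closed-form estimate (my own sketch, using the same cost model as above, i.e. each reallocation copies the old capacity's worth of elements): the copies form a geometric series, so for a growth factor g = 1 + increment/100 the per-element overhead lies between 1/(g - 1) and g/(g - 1), and only g = 2 can bring it down to roughly one copy per element:
#include <iostream>

using namespace std;

int main()
{
    // lower/upper bounds of the geometric-series estimate for the average
    // copies per element; where the final size falls between two capacity
    // steps decides where in this range the measured value ends up
    for( double pct = 10.0; pct <= 100.0; pct += 10.0 )
    {
        double g = 1.0 + pct / 100.0;
        cout << pct << "%: " << 1.0 / (g - 1.0) << " .. " << g / (g - 1.0) << endl;
    }
}
This also explains why the simulated numbers above do not decrease monotonically: they depend on how close 1'000'000 happens to be to the last capacity step.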
Any questions?