I am comparing pushing data (1 million numbers) into a std::vector with and without prior resizing. To my surprise, I found the latter (without resizing) to be faster, which is contrary to expectation. What happened? I am using the MS VC++ 2017 compiler.
double times = 1000000;
vector<double> vec1;
auto tp_start = chrono::high_resolution_clock::now();
for (double i = 0; i < times; i++)
{
    vec1.push_back(i);
}
auto lapse = chrono::high_resolution_clock::now() - tp_start;
cout << chrono::duration_cast<chrono::milliseconds>(lapse).count() << " ms : push without prior resize \n"; // 501 ms
vector<double> vec2;
vec2.resize(times); // resizing
tp_start = chrono::high_resolution_clock::now();
for (double i = 0; i < times; i++)
{
    vec2[i] = i;           // fastest
    // vec2.push_back(i);  // slower
}
lapse = chrono::high_resolution_clock::now() - tp_start;
cout << chrono::duration_cast<chrono::milliseconds>(lapse).count() << " ms : push with prior resizing \n"; // 518 ms; shouldn't this theoretically be faster?
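For reference, here is a self-contained version of what I am timing. The includes, the main wrapper, and the size_t loop counters are added here just so it compiles on its own; the loop bodies are otherwise the same as above.

#include <chrono>
#include <iostream>
#include <vector>
using namespace std;

int main()
{
    const size_t times = 1000000;

    // Case 1: push_back with no prior resize/reserve
    vector<double> vec1;
    auto tp_start = chrono::high_resolution_clock::now();
    for (size_t i = 0; i < times; i++)
    {
        vec1.push_back(static_cast<double>(i));
    }
    auto lapse = chrono::high_resolution_clock::now() - tp_start;
    cout << chrono::duration_cast<chrono::milliseconds>(lapse).count()
         << " ms : push without prior resize\n";

    // Case 2: resize up front, then assign through operator[]
    vector<double> vec2;
    vec2.resize(times);
    tp_start = chrono::high_resolution_clock::now();
    for (size_t i = 0; i < times; i++)
    {
        vec2[i] = static_cast<double>(i);
    }
    lapse = chrono::high_resolution_clock::now() - tp_start;
    cout << chrono::duration_cast<chrono::milliseconds>(lapse).count()
         << " ms : push with prior resizing\n";
}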
Edit:
After this change: vec2.resize(times); it works faster.
After this change: vec2.reserve(times); it works even faster (that variant is sketched after this list).
After this change: vec2[i] = i; it becomes super fast.
Any advice on what the best practice is?
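To be concrete, the reserve variant I mean is roughly this (same timing code as above; only the setup before the loop and the loop body change):

// Reserve capacity up front, then push_back: the buffer is allocated
// once, but push_back still grows the size element by element.
vector<double> vec3;
vec3.reserve(times);
for (size_t i = 0; i < times; i++)
{
    vec3.push_back(static_cast<double>(i));
}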
Edit 2 (compiler in optimized mode):
10 million elements:
120 ms : 41 ms (reserve & push_back)
121 ms : 35 ms (resize & vec[i])
100 million elements:
1356 ms : 427 ms (reserve & push_back)
1345 ms : 364 ms (resize & vec[i])
vec[i] still wins.