The simplest way is:
    auto const predicate = [](int const value) { return value % 2 == 0; };
    std::copy_if(begin(src), end(src), back_inserter(dest), predicate);

which relies on `push_back`.
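For reference, here is a self-contained sketch of this approach (a minimal, hypothetical example: the sample data, and the `src`/`dest` names carried over from the snippet above, are assumptions):

    #include <algorithm>
    #include <iostream>
    #include <iterator>
    #include <vector>

    int main() {
        std::vector<int> const src = {1, 2, 3, 4, 5, 6}; // hypothetical input
        std::vector<int> dest;
        auto const predicate = [](int const value) { return value % 2 == 0; };
        // back_inserter calls dest.push_back for each element that passes the predicate
        std::copy_if(begin(src), end(src), std::back_inserter(dest), predicate);
        for (int const value : dest) { std::cout << value << ' '; } // prints: 2 4 6
    }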
Now, indeed, this may trigger memory reallocations. However, I'd like to underline that `push_back` has amortized constant complexity: it is O(1) on average, which is achieved by growing the capacity exponentially (so that the number of allocations performed is O(log N)).

On the other hand, if you have 1 million elements of which only 5 are even, it will not allocate 4 MB of memory up-front only to relinquish all but 20 bytes of it later on.
Therefore:
- it's optimal when the distribution is skewed toward odd numbers, because it does not over-allocate much
- it's close to optimal otherwise, because it does not reallocate much
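To make the O(log N) allocation count concrete, here is a minimal sketch (my own illustration, not a guaranteed behavior of any particular standard library) that counts reallocations by watching `capacity()` change across a million `push_back` calls:

    #include <cstddef>
    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> v;
        std::size_t reallocations = 0;
        std::size_t last_capacity = v.capacity();
        for (int i = 0; i < 1000000; ++i) {
            v.push_back(i);
            if (v.capacity() != last_capacity) { // capacity grew: a reallocation occurred
                ++reallocations;
                last_capacity = v.capacity();
            }
        }
        // With exponential growth, expect a count on the order of log(N),
        // typically 20-40 depending on the implementation's growth factor.
        std::cout << reallocations << " reallocations for "
                  << v.size() << " elements\n";
    }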
Even more interesting, if you have an idea of the distribution up-front, you can use `reserve` and `shrink_to_fit`:
    // 90% of the time, 30% of the numbers are even:
    dest.reserve(src.size() * 3 / 10);
    auto const predicate = [](int const value) { return value % 2 == 0; };
    std::copy_if(begin(src), end(src), back_inserter(dest), predicate);
    dest.shrink_to_fit();
This way:
- if there were fewer than 30%, `shrink_to_fit` might trim the excess
- if there were exactly 30%, bingo
- if there were more than 30%, re-allocations are triggered as necessary, still following that O(log N) pattern
Personal experience tells me that the call to `reserve` is rarely (if ever) worth it: amortized constant complexity is really good at keeping costs down.
Note: `shrink_to_fit` is non-binding; there is no guaranteed way to force `capacity` to be equal to `size`, as the implementation chooses what's best.
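If you really do need the excess trimmed, the classic copy-and-swap idiom is a common alternative; a sketch (again, an assumption on my part rather than part of the recipe above):

    // Copy dest into a temporary, then swap the two vectors' storage.
    // In practice the copy allocates storage for exactly size() elements,
    // although even this is not formally guaranteed by the standard.
    std::vector<int>(dest).swap(dest);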