Suppose I have

std::vector<T1> vec1 {/* filled with T1's */};
std::vector<T2> vec2 {/* filled with T2's */};

and some function T1 f(T2), which could of course be a lambda. What is the optimal way to concatenate vec1 and vec2 whilst applying f to each T2 in vec2?
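For concreteness, here is a minimal instantiation of the setup (the types and function are purely illustrative):

std::vector<std::string> vec1 {"a", "b"};           // T1 = std::string
std::vector<int> vec2 {1, 2, 3};                    // T2 = int
auto f = [] (int i) { return std::to_string(i); };  // plays the role of T1 f(T2)
// desired result: vec1 == {"a", "b", "1", "2", "3"}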
The apparently obvious solution is std::transform, i.e.

vec1.reserve(vec1.size() + vec2.size());
std::transform(vec2.begin(), vec2.end(), std::back_inserter(vec1), f);

but I say this is not optimal, as std::back_inserter must make an unnecessary capacity check on each inserted element. What would be optimal is something like

vec1.insert(vec1.end(), vec2.begin(), vec2.end(), f);

which could get away with a single capacity check. Sadly, this is not valid C++. Essentially, this is the same reason why std::vector::insert is optimal for plain vector concatenation; see this question and the comments in this question for further discussion on this point.
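(As an aside, here is a sketch of how close one can get to that hypothetical overload, assuming a C++23 standard library; this is not an option for the C++11 benchmark below:)

#include <ranges>
#include <vector>

// std::vector::append_range (C++23) over a transformed view. The view is a
// sized, random-access range, so the vector can compute the final size up
// front, and typical implementations perform a single growth rather than a
// per-element capacity check.
vec1.append_range(vec2 | std::views::transform(f));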
So:

- Is std::transform the optimal method using the STL?
- If so, can we do better?
- Is there a good reason why the insert function described above was left out of the STL?
UPDATE

I've had a go at verifying whether the multiple capacity checks have any noticeable cost. To do this I simply pass the identity function (f(x) = x) to the std::transform and push_back methods discussed in the answers. The full code is:
#include <iostream>
#include <vector>
#include <iterator>
#include <algorithm>
#include <cstdint>
#include <chrono>
#include <numeric>
#include <random>
using std::size_t;
std::vector<int> generate_random_ints(size_t n)
{
std::default_random_engine generator;
auto seed1 = std::chrono::system_clock::now().time_since_epoch().count();
generator.seed((unsigned) seed1);
std::uniform_int_distribution<int> uniform {};
std::vector<int> v(n);
std::generate_n(v.begin(), n, [&] () { return uniform(generator); });
return v;
}
template <typename D=std::chrono::nanoseconds, typename F>
D benchmark(F f, unsigned num_tests)
{
D total {0};
for (unsigned i = 0; i < num_tests; ++i) {
auto start = std::chrono::system_clock::now();
f();
auto end = std::chrono::system_clock::now();
total += std::chrono::duration_cast<D>(end - start);
}
return D {total / num_tests};
}
// Note: each test function below takes vec1 by value, so every benchmark
// iteration concatenates onto a fresh copy of vec1.
template <typename T>
void std_insert(std::vector<T> vec1, const std::vector<T> &vec2)
{
vec1.insert(vec1.end(), vec2.begin(), vec2.end());
}
template <typename T1, typename T2, typename UnaryOperation>
void push_back_concat(std::vector<T1> vec1, const std::vector<T2> &vec2, UnaryOperation op)
{
vec1.reserve(vec1.size() + vec2.size());
for (const auto& x : vec2) {
vec1.push_back(op(x));
}
}
template <typename T1, typename T2, typename UnaryOperation>
void transform_concat(std::vector<T1> vec1, const std::vector<T2> &vec2, UnaryOperation op)
{
vec1.reserve(vec1.size() + vec2.size());
std::transform(vec2.begin(), vec2.end(), std::back_inserter(vec1), op);
}
int main(int argc, char **argv)
{
unsigned num_tests {1000};
size_t vec1_size {10000000};
size_t vec2_size {10000000};
auto vec1 = generate_random_ints(vec1_size);
auto vec2 = generate_random_ints(vec2_size);
auto f_std_insert = [&vec1, &vec2] () {
std_insert(vec1, vec2);
};
auto f_push_back_id = [&vec1, &vec2] () {
push_back_concat(vec1, vec2, [] (int i) { return i; });
};
auto f_transform_id = [&vec1, &vec2] () {
transform_concat(vec1, vec2, [] (int i) { return i; });
};
auto std_insert_time = benchmark<std::chrono::milliseconds>(f_std_insert, num_tests).count();
auto push_back_id_time = benchmark<std::chrono::milliseconds>(f_push_back_id, num_tests).count();
auto transform_id_time = benchmark<std::chrono::milliseconds>(f_transform_id, num_tests).count();
std::cout << "std_insert: " << std_insert_time << "ms" << std::endl;
std::cout << "push_back_id: " << push_back_id_time << "ms" << std::endl;
std::cout << "transform_id: " << transform_id_time << "ms" << std::endl;
return 0;
}
Compiled with:
g++ vector_insert_demo.cpp -std=c++11 -O3 -o vector_insert_demo
Output:
std_insert: 44ms
push_back_id: 61ms
transform_id: 61ms
The compiler will have inlined the lambda, so that cost can safely be discounted. Unless anyone has a viable alternative explanation for these results (or is willing to check the assembly), I think it's reasonable to conclude that the multiple capacity checks carry a noticeable cost.
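For completeness, here is a sketch of one way to pay the capacity cost only once in plain C++11, written in the same shape as the benchmark functions above (the name resize_transform_concat is hypothetical). It assumes T1 is cheap to value-initialize (as int is), since resize must value-initialize the new tail before transform overwrites it, which may offset the savings for more expensive types.

template <typename T1, typename T2, typename UnaryOperation>
void resize_transform_concat(std::vector<T1> vec1, const std::vector<T2> &vec2, UnaryOperation op)
{
    auto old_size = vec1.size();
    vec1.resize(old_size + vec2.size());  // single capacity check / growth
    // overwrite the value-initialized tail in place
    std::transform(vec2.begin(), vec2.end(), vec1.begin() + old_size, op);
}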