I find myself using C++11 more and more lately, and where I would have used iterators in the past, I now use range-based for loops whenever possible:
std::vector<int> coll(10);
std::generate(coll.begin(), coll.end(), []() { return rand(); } );
C++03:
for (std::vector<int>::const_iterator it = coll.begin(); it != coll.end(); ++it) {
    foo_func(*it);
}
C++11:
for (auto e : coll) { foo_func(e); }
But what if the collection's element type is a template parameter? foo_func() will probably be overloaded to take complex (i.e. expensive-to-copy) types by const reference and simple ones by value:
void foo_func(const BigType& e) { ... }
void foo_func(int e) { ... }
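For context, this is roughly the kind of generic function I have in mind (the name process_all and its signature are just illustrative):

template <typename Container>
void process_all(const Container& coll) {
    // C++03 style: dereferencing a const_iterator yields a const reference,
    // so foo_func(const BigType&) binds directly and foo_func(int) gets a copy.
    for (typename Container::const_iterator it = coll.begin(); it != coll.end(); ++it) {
        foo_func(*it);
    }
}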
I didn't give this much thought while I was using the C++03-style code above: I would iterate the same way, and since dereferencing a const_iterator produces a const reference, everything was fine. But with the C++11 range-based for loop, I need to use a const reference as the loop variable to obtain the same behavior:
for (const auto& e : coll) { foo_func(e); }
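Inside the template, that becomes (again, process_all is just an illustrative name):

template <typename Container>
void process_all(const Container& coll) {
    for (const auto& e : coll) {
        foo_func(e);   // const BigType& binds directly; foo_func(int) copies from the const reference
    }
}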
And suddenly I wasn't sure anymore whether this would introduce unnecessary assembly instructions when the element type is a simple one (e.g. a behind-the-scenes pointer being used to implement the reference).
But compiling a sample application confirmed that there is no overhead for simple types, so this seems to be the generic way to use range-based for loops in templates. If that hadn't been the case, boost::call_traits<T>::param_type would have been the way to go.
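For reference, here is roughly what the call_traits fallback would have looked like (a sketch, again using the illustrative process_all; boost::call_traits<T>::param_type is const T& for class types and a plain value for small built-in types):

#include <boost/call_traits.hpp>

template <typename Container>
void process_all(const Container& coll) {
    // param_type picks "const value_type&" or plain "value_type" automatically
    using param_type = typename boost::call_traits<typename Container::value_type>::param_type;
    for (param_type e : coll) {
        foo_func(e);
    }
}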
Question: Are there any guarantees in the standard?
(I realize that the issue is not really specific to range-based for loops; it is also present when using const_iterators.)