Why do we still use structures and typedefs (or usings) for metaprogramming?
Look at the code in this question - Inferring the call signature of a lambda or arbitrary callable for "make_function":
template<typename T> struct remove_class { };
template<typename C, typename R, typename... A>
struct remove_class<R(C::*)(A...)> { using type = R(A...); };
template<typename T, bool> struct get_signature_impl { };
template<typename R, typename... A>
struct get_signature_impl<R(A...), true> { using type = R(A...); };
template<typename R, typename... A>
struct get_signature_impl<R(*)(A...), true> { using type = R(A...); };
template<typename T>
struct get_signature_impl<T, true> { using type = typename remove_class<
decltype(&std::remove_reference<T>::type::operator())>::type; };
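For contrast, here is roughly what a call site looks like with this struct-based version. This usage sketch is mine, not part of the linked question; the extra const specialization of remove_class and the hard-coded true are my assumptions to make the excerpt self-contained:
// Assumed addition: the excerpt omits the const-qualified case that an
// ordinary (non-mutable) lambda's operator() needs.
template<typename C, typename R, typename... A>
struct remove_class<R(C::*)(A...) const> { using type = R(A...); };
int main() {
    auto lambda = [](int x, double y) { return x + y; };
    // The bool's origin isn't shown in the excerpt, so I pass true by hand.
    using Sig = typename get_signature_impl<decltype(lambda), true>::type; // double(int, double)
    std::function<Sig> f = lambda; // needs <functional> and <type_traits>
}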
There are a lot of weird tricks in there, like that bool parameter, noisy keywords like typename, and redundant boilerplate like the empty struct get_signature_impl primary template.
It's great that we got the using keyword in C++11, but it doesn't make much difference.
In C++11 we have decltype and trailing return types. With this power, we can drop all the ugly metastructures and write beautiful metafunctions.
So, we can rewrite the code above:
// member function pointers (non-const and const -- the const one is what a non-mutable lambda's operator() is)
template<typename C, typename R, typename... A> auto make_function_aux(R(C::*)(A...)) -> std::function<R(A...)>;
template<typename C, typename R, typename... A> auto make_function_aux(R(C::*)(A...) const) -> std::function<R(A...)>;
// plain functions and function pointers
template<typename R, typename... A> auto make_function_aux(R(A...)) -> std::function<R(A...)>;
template<typename R, typename... A> auto make_function_aux(R(*)(A...)) -> std::function<R(A...)>;
// anything else with an operator(): lambdas and other functors
template<typename T> auto make_function_aux(const T&) -> decltype(make_function_aux(&T::operator()));
template<typename F> auto make_function(F&& f) -> decltype(make_function_aux(f)) { return decltype(make_function_aux(f))(std::forward<F>(f)); }
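And here is a usage sketch of the rewritten version (again mine, not from the linked question; the helper function twice is made up, and <functional> and <utility> are assumed to be included):
int twice(int x) { return 2 * x; }
int main() {
    auto f1 = make_function([](int x, double y) { return x + y; }); // std::function<double(int, double)>
    auto f2 = make_function(twice);                                 // std::function<int(int)>
    auto f3 = make_function(&twice);                                // std::function<int(int)>
}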
Are there any situations where template partial specialisation is better than function overloading with decltype for matching template arguments, or is this just a case of programmer inertia?