As we know, a CPO can use concepts to check the interface requirements (argument type, return type, etc.) of the user-defined overload it finds. But it can't check the semantics of the user's function, right? As the following code shows:
#include <concepts>

namespace My_A {
    struct A {};
    void some_name(A); // not fit: wrong return type
}
namespace My_B {
    struct B {};
    int some_name(B); // fits the concept, but does something unrelated
}
template <typename T>
concept usable_some_name =
    requires(T arg) {
        { some_name(arg) } -> std::convertible_to<int>;
    };
struct __use_fn {
    template <typename T>
    int operator()(T a) const
    {
        if constexpr (usable_some_name<T>)
            return some_name(a);
        // ...
    }
};
inline constexpr __use_fn use{};
It can weed out unqualified overloads, but there is no guarantee that a "qualified" overload actually does what the CPO expects. If I want my code to be error-free, I either have to avoid every name that any CPO I might possibly touch could find (which is impossible), or I have to make sure that every function of mine sharing such a name has the semantics those CPOs expect (which also seems like an unreasonable burden).
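To make the risk concrete (reusing the snippet above, and assuming My_B::some_name is actually given a definition somewhere):

int main() {
    My_B::B b{};
    // usable_some_name<My_B::B> holds, because ADL finds
    // My_B::some_name(B) and it returns int -- so the CPO
    // calls it, even though My_B never meant to opt in.
    int r = use(b);
    (void)r;
}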
Isn't it an unreasonable request that I must define my function to behave the way the ranges algorithms expect, every time it merely happens to share a name that the ranges algorithms use?
Compare this with the traditional approaches:
template <>
struct hash<my_type>
{
    // ...
};
or
priority_queue<my_type, vector<my_type>, my_type_greater> pq;
We can easily see that with these traditional approaches our customization is called only when we explicitly ask for it, with no risk of the accidental matching that a CPO permits.
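For example (a sketch, assuming the specialization above is of std::hash and that my_type is equality-comparable), the customization participates only because I explicitly picked my_type as the key:

#include <unordered_set>

std::unordered_set<my_type> s; // hash<my_type> is used here only because
                               // I explicitly chose my_type as the key type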
Is the current CPO design introducing a kind of over-coupling? (I do know what it is designed for and what its advantages are.) Is there a better way (maybe tag_invoke, or something else, I don't know) to solve this problem?
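As I understand it, tag_invoke (P1895) would put the CPO's own type into the customization's signature, so a mere name collision can no longer match. A minimal sketch of what I mean (all names here are hypothetical, not any real library's API):

#include <concepts>
#include <utility>

namespace lib {
    // The CPO's own type appears in the customization's signature,
    // so only overloads written specifically for this CPO can match.
    struct some_name_fn {
        template <typename T>
            requires requires(T&& t) {
                { tag_invoke(std::declval<const some_name_fn&>(),
                             std::forward<T>(t)) }
                    -> std::convertible_to<int>;
            }
        int operator()(T&& t) const {
            return tag_invoke(*this, std::forward<T>(t));
        }
    };
    inline constexpr some_name_fn some_name{};
}

namespace My_B {
    struct B {};
    int some_name(B); // unrelated overload; the CPO can no longer find it

    // Opting in is now unmistakably deliberate:
    int tag_invoke(lib::some_name_fn, B) { return 42; }
}

int main() { return lib::some_name(My_B::B{}) == 42 ? 0 : 1; }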