So I want to write an automatic !=:
template<typename U, typename T>
bool operator!=(U&& u, T&& t) {
  return !( std::forward<U>(u) == std::forward<T>(t) );
}
but that is impolite [1]. So I write
// is T() == U() valid?
template<typename T, typename U, typename=void>
struct can_equal : std::false_type {};

template<typename T, typename U>
struct can_equal<
  T,
  U,
  typename std::enable_if<
    std::is_convertible<
      decltype( std::declval<T>() == std::declval<U>() ),
      bool
    >::value
  >::type
> : std::true_type {};
which is a type traits class that says "is t == u valid code that returns a type convertible to bool?".
So I improve my !=:
template<typename U, typename T,
  typename=typename std::enable_if<can_equal<U,T>::value>::type
>
bool operator!=(U&& u, T&& t) {
  return !( std::forward<U>(u) == std::forward<T>(t) );
}
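A quick sketch of how this behaves (only_eq and no_ops are made-up types; the trait and the template above are assumed in scope):

struct only_eq {};
bool operator==(const only_eq&, const only_eq&) { return true; }

struct no_ops {};

int main() {
  only_eq a, b;
  bool r = (a != b); // OK: the template kicks in and returns !(a == b)
  (void)r;

  // no_ops x, y;
  // x != y;         // still an error: can_equal<no_ops&, no_ops&> is false,
  //                 // so the template != drops out of overload resolution
}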
and now it is only a valid overload if == exists. Sadly, it is a bit greedy:
struct test {
};
bool operator==(const test&, const test&);
bool operator!=(const test&, const test&);
as it will snarf up pretty much every test() != test() rather than the above != being called. I think this is not desired -- I would rather call an explicit != than auto-forward to == and negate.
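To illustrate the greediness, a sketch; the mismatched return values are deliberate so that it is visible which overload gets called:

struct test {};
bool operator==(const test&, const test&) { return true; }
bool operator!=(const test&, const test&) { return true; } // deliberately inconsistent

int main() {
  test a, b;
  bool r = (a != b);
  // The template deduces U = T = test&, so its parameters bind test& to
  // test& -- an exact match. The explicit != must bind test& to const
  // test&, a worse reference binding, so the template wins:
  // r == !(a == b) == false, and the explicit != is never called.
  (void)r;
}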
So, I write up this traits class:
template<typename T, typename U, typename=void>
struct can_not_equal; // ... basically the same as can_equal, omitted
which tests if T != U is valid.
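For completeness, here is what that omitted trait presumably looks like, mirroring can_equal with != in place of ==:

// is T() != U() valid, with a result convertible to bool?
template<typename T, typename U, typename=void>
struct can_not_equal : std::false_type {};

template<typename T, typename U>
struct can_not_equal<
  T,
  U,
  typename std::enable_if<
    std::is_convertible<
      decltype( std::declval<T>() != std::declval<U>() ),
      bool
    >::value
  >::type
> : std::true_type {};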
We then augment the != as follows:
template<typename U, typename T,
  typename=typename std::enable_if<
    can_equal<U,T>::value
    && !can_not_equal<U,T>::value
  >::type
>
bool operator!=(U&& u, T&& t) {
  return !( std::forward<U>(u) == std::forward<T>(t) );
}
which, if you parse it, says "this sentence is false" -- operator!= exists between T and U iff operator!= does not exist between T and U.
Not surprisingly, every compiler I have tested segfaults when fed this (clang 3.2, gcc 4.8, gcc 4.7.2, intel 13.0.1). I suspect that what I'm doing is illegal, but I would love to see the standard reference. (edit: What I'm doing is illegal, because it induces an unbounded recursive template expansion: determining if my != applies requires checking if my != applies. The version linked in the comments, with #if 1, gives a sensible error.)
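A rough trace of that recursion, written out as comments (given the definitions above):

// a != b
//   -> is the template != a viable candidate?
//      -> instantiate can_not_equal<U,T>
//         -> does declval<U>() != declval<T>() compile?
//            -> is the template != a viable candidate for that expression?
//               -> instantiate can_not_equal<U,T>   // the same question again
//                  -> ... unbounded recursive template expansion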
But my question: is there a way I can convince my SFINAE-based overload to ignore "itself" when deciding if it should fail or not, or somehow get rid of the self-referential issue? Or lower the precedence of my operator!= enough that any explicit != wins out, even if it is otherwise not as good a match?
The one that doesn't check for "!= does not exist" works reasonably well, but not well enough for me to be so impolite as to inject it into the global namespace.
The goal is that any code that would compile without my "magic" != does exactly the same thing once my "magic" != is introduced. If and only if != is otherwise invalid and bool r = !(a==b) is well formed should my "magic" != kick in.
Footnote 1: If you create a template<typename U, typename T> bool operator!=(U&& u, T&& t), SFINAE will think that every pair of types has a valid != between them. Then when you try to actually call !=, it is instantiated, and fails to compile. On top of that, you stomp on bool operator!=( const foo&, const foo& ) functions, because you are a better match for foo() != foo() and foo a, b; a != b;. I consider doing both of these impolite.
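To make the first failure mode concrete, a sketch; foo is illustrative, and a translation unit containing only the unconstrained template and the can_not_equal trait is assumed:

template<typename U, typename T>
bool operator!=(U&& u, T&& t) {
  return !( std::forward<U>(u) == std::forward<T>(t) );
}

struct foo {}; // no == and no != of its own

// Detection is fooled: the unconstrained template makes
// declval<foo>() != declval<foo>() well formed with type bool, so any
// trait asking "does foo != foo compile?" answers yes...
static_assert( can_not_equal<foo, foo>::value, "SFINAE thinks != exists" );

// ...but actually calling it instantiates the body, which needs a
// foo == foo that does not exist:
// foo a, b;
// a != b; // hard error inside the template body, not a SFINAE failure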