You'd probably have to ask William Kahan, the primary architect behind IEEE 754-1985, but this answer sheds some light on the topic:
more importantly, there was no isnan( ) predicate at the time that NaN was formalized in the 8087 arithmetic; it was necessary to provide programmers with a convenient and efficient means of detecting NaN values that didn’t depend on programming languages providing something like isnan( ) which could take many years. I’ll quote Kahan’s own writing on the subject:
Were there no way to get rid of NaNs, they would be as useless as Indefinites on CRAYs; as soon as one were encountered, computation would be best stopped rather than continued for an indefinite time to an Indefinite conclusion. That is why some operations upon NaNs must deliver non-NaN results. Which operations? … The exceptions are C predicates “ x == x ” and “ x != x ”, which are respectively 1 and 0 for every infinite or finite number x [emphasis added] but reverse if x is Not a Number ( NaN ); these provide the only simple unexceptional distinction between NaNs and numbers [emphasis added] in languages that lack a word for NaN and a predicate IsNaN(x).
If +inf weren't equal to +inf, the x != x test for NaNs wouldn't work, because it would catch infinities as well. Back in 1985, a C programmer could have written:
#define is_nan(x) ((x) != (x))
#define is_pos_inf(x) ((x) == 1.0/0.0)
#define is_neg_inf(x) ((x) == -1.0/0.0)
With inf != inf, you'd need something like:
#define is_nan(x) (!((x) >= 0) && !((x) <= 0))
#define is_pos_inf(x) ((x) != (x) && (x) > 0.0)
#define is_neg_inf(x) ((x) != (x) && (x) < 0.0)
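For what it's worth, here's a quick sanity check (my own sketch, not part of the original answer) showing that the first set of macros cleanly separates NaNs from infinities, and that it works precisely because +inf == +inf holds. C99's NAN and INFINITY macros are used only to build the test values:

#include <stdio.h>
#include <math.h>   /* NAN and INFINITY, just for building test values */

#define is_nan(x)     ((x) != (x))
#define is_pos_inf(x) ((x) == 1.0/0.0)
#define is_neg_inf(x) ((x) == -1.0/0.0)

int main(void)
{
    double values[] = { NAN, INFINITY, -INFINITY, 0.0, 42.0 };
    const char *names[] = { "NaN", "+inf", "-inf", "0.0", "42.0" };

    for (int i = 0; i < 5; i++)
        printf("%5s: is_nan=%d  is_pos_inf=%d  is_neg_inf=%d\n",
               names[i], is_nan(values[i]),
               is_pos_inf(values[i]), is_neg_inf(values[i]));

    /* Only the NaN row reports is_nan=1; the infinities are matched by the
       == tests instead, which is exactly what +inf == +inf buys you. */
    return 0;
}

Note that options like GCC's -ffast-math can break the x != x trick, because they allow the compiler to assume NaNs never occur.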
I can see your point and I agree that having +inf != +inf is more correct from a purely mathematical standpoint. But IMO, it doesn't outweigh the practical considerations.
The [sets] of natural numbers and rational numbers, both are infinite but [have] not the same [cardinality].
This hasn't much to do with floating-point calculations.
If you have X=1e200 and Y=1e300 (both X and Y are 64-bit doubles), then X==Y is false, but X*1e200==Y*1e200 is true (both are +inf), which is mathematically incorrect.
Floating-point math is inherently mathematically incorrect. You can find many finite floating-point numbers X, Y, and Z with X != Y where X <op> Z == Y <op> Z.
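To make that concrete, here's a minimal sketch (my own, with + as the <op>) where two clearly different finite doubles become indistinguishable after the same operation, no infinities involved:

#include <stdio.h>

int main(void)
{
    double x = 1.0;
    double y = 2.0;
    double z = 1e20;   /* so large that adding 1.0 or 2.0 is lost to rounding */

    printf("x != y      : %d\n", x != y);          /* prints 1 */
    printf("x+z == y+z  : %d\n", x + z == y + z);  /* prints 1: both round to 1e20 */
    return 0;
}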
I do not see any advantage, or any application, that requires the fact that +inf == +inf. You should not compare any floating-point values with == anyway.
I also can't see an application that would require +inf != +inf.
X==Y is [...] true, if X-Y==0 is true, but inf-inf is NaN.
This is in fact an inconsistency that +inf != +inf would solve. But it seems like a minor detail to me.
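For the record, the inconsistency looks like this in code (my own sketch, using C99's INFINITY and isnan()):

#include <stdio.h>
#include <math.h>

int main(void)
{
    double inf = INFINITY;

    printf("inf == inf       : %d\n", inf == inf);        /* 1 */
    printf("inf - inf == 0.0 : %d\n", inf - inf == 0.0);  /* 0 */
    printf("isnan(inf - inf) : %d\n", isnan(inf - inf));  /* 1 */
    /* "X == Y" and "X - Y == 0" agree for finite numbers,
       but disagree once infinities are involved. */
    return 0;
}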