I have recently analyzed an old piece of code compiled with VS2005 because of a difference in numerical behaviour between the "debug" (no optimizations) and "release" (/O2 /Oi /Ot) builds. The (reduced) code looks like this:
#include <math.h>
#include <stdio.h>

void f(double x1, double y1, double x2, double y2)
{
    double a1, a2, d;

    a1 = atan2(y1, x1);
    a2 = atan2(y2, x2);
    d = a1 - a2;
    if (d == 0.0) { // NOTE: I know that == on reals is "evil"!
        printf("EQUAL!\n");
    }
}
The function f is expected to print "EQUAL!" when invoked with identical pairs of values (e.g. f(1,2,1,2)), but this doesn't always happen in "release". It turns out the compiler optimized the code as if it were d = a1 - atan2(y2,x2), removing the assignment to the intermediate variable a2 entirely. Moreover, it took advantage of the fact that the second atan2() result was already on the FPU stack, so it reloaded a1 onto the FPU and subtracted the two values there. The problem is that the FPU works at extended precision (80 bits) while a1 is "only" a double (64 bits), so storing the first atan2() result to memory actually lost precision. As a result, d ends up holding the "conversion error" between extended and double precision.
I know perfectly well that identity (the == operator) on float/double values should be avoided, so my question is not about how to check proximity between doubles. My question is about how "contractual" an assignment to a local variable should be considered. From my "naive" point of view, an assignment should force the compiler to convert the value to the precision represented by the variable's type (double, in my case). What if the variables were float? What if they were int (weird, but legal)?
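To make the question concrete, here is a hypothetical variant (the float and int declarations are purely mine, to illustrate what I mean by the assignment forcing a narrowing conversion):

#include <math.h>
#include <stdio.h>

void f_float(double x1, double y1, double x2, double y2)
{
    float a1 = atan2(y1, x1);  /* is the implicit double -> float conversion guaranteed here? */
    float a2 = atan2(y2, x2);
    float d  = a1 - a2;
    int   i1 = atan2(y1, x1);  /* "weird, but legal": truncation to int */

    if (d == 0.0f) {
        printf("EQUAL (float)! i1=%d\n", i1);
    }
}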
So, in short, what does the C standard say about these cases?