
I have recently analyzed an old piece of code compiled with VS2005, because of a difference in numerical behaviour between the "debug" (no optimizations) and "release" (/O2 /Oi /Ot) builds. The (reduced) code looks like this:

#include <math.h>
#include <stdio.h>

void f(double x1, double y1, double x2, double y2)
{
    double a1, a2, d;

    a1 = atan2(y1, x1);
    a2 = atan2(y2, x2);
    d = a1 - a2;
    if (d == 0.0) { // NOTE: I know that == on reals is "evil"!
        printf("EQUAL!\n");
    }
}

The function f is expected to print "EQUAL!" when invoked with identical pairs of values (e.g. f(1,2,1,2)), but this doesn't always happen in "release". Indeed, the compiler optimized the code as if it were something like d = a1 - atan2(y2,x2) and removed the assignment to the intermediate variable a2 completely. Moreover, it took advantage of the fact that the second atan2()'s result was already on the FPU stack, so it reloaded a1 onto the FPU and subtracted the two values there. The problem is that the FPU works at extended precision (80 bits) while a1 was "only" a double (64 bits), so saving the first atan2()'s result to memory actually lost precision. In the end, d contains the "conversion error" between extended and double precision.
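In effect, the release build computes something like the sketch below. It emulates the x87 behaviour with an explicit long double standing in for the 80-bit value on the FPU stack; note that this emulation only reproduces the effect on compilers where long double is the 80-bit extended format (e.g. GCC on x86), since MSVC makes long double the same as double:

#include <math.h>
#include <stdio.h>

int main(void)
{
    long double on_fpu = atan2l(2.0L, 1.0L); /* second result, still at 80 bits */
    double a1 = (double)on_fpu;              /* first result, rounded to 64 bits
                                                when it was stored to memory    */
    double d  = (double)(a1 - on_fpu);       /* a1 reloaded and subtracted at
                                                extended precision: d holds the
                                                tiny conversion error, not 0    */
    printf("d = %g\n", d);
    return 0;
}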

I know perfectly well that testing floats/doubles for identity (the == operator) should be avoided. My question is not about how to check proximity between doubles; it is about how "contractual" an assignment to a local variable should be considered. From my "naive" point of view, an assignment should force the compiler to convert a value to the precision represented by the variable's type (double, in my case). What if the variables were "float"? What if they were "int" (weird, but legal)?
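To make the variants concrete (a sketch with a hypothetical helper, not the real code):

#include <math.h>

void variants(double x1, double y1)
{
    double a = atan2(y1, x1); /* must the stored value be exactly a 64-bit double? */
    float  b = atan2(y1, x1); /* and here, exactly a 32-bit float?                 */
    int    c = atan2(y1, x1); /* weird but legal: the conversion truncates toward
                                 zero, and is fully deterministic                  */
    (void)a; (void)b; (void)c; /* silence unused-variable warnings */
}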

So, in short, what does the C standard say about these cases?

– Giuseppe Guerrini (edited by Marius Bancila)
  • I think the standard makes no promises for floating-point returns of functions contained in libraries. But that's my inexpert understanding of the subject, so not very helpful. – dwn Jan 07 '15 at 21:39
  • By default Visual Studio sets the floating-point model to "precise", which allows it to do this sort of optimization. You could try setting it to strict and see what happens. – Mysticial Jan 07 '15 at 21:40
  • If all else fails, casting all the intermediates to (double) should do the trick. At least it worked when I was trying to get IEEE-specific behavior for a very specific operation and I didn't want to turn on global fp:strict. – Mysticial Jan 07 '15 at 21:42
  • If you want to get rid of the x87 excess precision, there's typically a specific compilation flag: on gcc there's `-fexcess-precision=standard` (and `-ffloat-store`); for VC++, `/fp:strict`. – Matteo Italia Jan 07 '15 at 21:47
  • Report the value of `FLT_EVAL_METHOD` in debug and release mode. – chux - Reinstate Monica Jan 07 '15 at 21:52
  • For debugging, change to `volatile double a1, a2, d;`. This will force intermediate saves and reads (see the sketch after this comment thread). – chux - Reinstate Monica Jan 07 '15 at 21:54
  • @chux VS 2005 does not even claim to implement C99, so it does not have to define `FLT_EVAL_METHOD`. And it is not as if compilers always did what they say when they do define it (http://stackoverflow.com/questions/17663780/is-there-a-document-describing-how-clang-handles-excess-floating-point-precision) – Pascal Cuoq Jan 07 '15 at 21:54
  • @Pascal Cuoq True about VS. In any case, this is certainly the result of an inconsistent use of wider FP. – chux - Reinstate Monica Jan 07 '15 at 21:57
  • @All: in the end, VS2005 is probably "too old" and, well, not perfect. Not C99, indeed (+1, Pascal). My question may also become "how free is a compiler to remove assignments?". That's why I also asked "What if the variables were "float"? What if they were "int" (weird, but legal)?" – Giuseppe Guerrini Jan 07 '15 at 22:01
  • @GiuseppeGuerrini Assignments have effects (conversion to the type of the destination lvalue, and furthermore removal of excess precision which is not automatically implied by the conversion in FLT_EVAL_METHOD>2). These effects are part of the meaning of the source code and must take place in the assembly code. Apart from that, what's an “assignment”? Does assigning to a register count? If block-scope variable `x` is only used as part of the expression `x+1`, can you never assign `x` but directly compute `x+1`? – Pascal Cuoq Jan 07 '15 at 22:08
  • @GiuseppeGuerrini What I mean is that there is no notion of “assignment” in assembly code. The only demand one can make is that the functional effects of assignments in the source code are respected in the results computed by the binary code. Non-functional effects (e.g. a place in memory is overwritten with the value, time is spent doing the write) do not have to take place (except for `volatile` variables, that's another story). – Pascal Cuoq Jan 07 '15 at 22:11
  • @Pascal: "what's an assignment?". You have already answered! Since the compiler's goal is (should be) to preserve the final result of a set of operations, the effect of assigning to an intermediate/internal/temporary (useless?) variable is to convert an intermediate value to a particular type (and in my case that does not happen, hum...). – Giuseppe Guerrini Jan 07 '15 at 22:16
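Putting the comment suggestions together, here is a sketch of the workarounds (volatile per chux, casts to (double) per Mysticial; how reliable each one is depends on compiler and flags, with /fp:strict or -ffloat-store being the blunter but surer instruments):

#include <math.h>
#include <stdio.h>

void f(double x1, double y1, double x2, double y2)
{
    volatile double a1 = atan2(y1, x1); /* volatile forces a real store and
                                           reload, so a1 really is a 64-bit
                                           double when it is read back       */
    volatile double a2 = atan2(y2, x2);
    double d = (double)a1 - (double)a2; /* the casts ask for any excess
                                           precision to be discarded         */
    if (d == 0.0) {
        printf("EQUAL!\n");
    }
}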

1 Answer


From my "naive" point of view, an assignment should force the compiler to convert a value to the precision represented by the variable's type (double, in my case).

Yes, this is what the C99 standard says. See below.

So, in short, what does the C standard say about these cases?

The C99 standard allows, in some circumstances, floating-point operations to be computed at a higher precision than the one implied by the type: look for FLT_EVAL_METHOD and FP_CONTRACT in the standard; these are the two constructs related to excess precision. But I am not aware of any wording that could be interpreted as allowing the compiler to reduce the precision of a floating-point value arbitrarily, from the computation precision down to the type's precision. In a strict interpretation of the standard, this should only happen at specific spots, such as assignments and casts, in a deterministic fashion.
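Concretely, under FLT_EVAL_METHOD == 2 (the x87 model, where float and double operations are evaluated to long double range and precision), the rules play out like this (a sketch with made-up values):

#include <stdio.h>

int main(void)
{
    double x = 1.0 / 3.0;         /* the division may be carried out at 80 bits,
                                     but the assignment rounds the result to a
                                     64-bit double, deterministically, here, not
                                     wherever the optimizer happens to spill    */
    double y = (double)(x / 3.0); /* a cast discards excess precision as well   */
    double z = x / 3.0 - y;       /* inside a larger expression, x / 3.0 may
                                     keep its 80-bit value, so z can be a tiny
                                     nonzero number even on a compiler that
                                     follows C99's model to the letter          */
    printf("x = %.17g  y = %.17g  z = %g\n", x, y, z);
    return 0;
}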

The best reference is Joseph S. Myers's analysis of the parts relevant to FLT_EVAL_METHOD:

C99 allows evaluation with excess range and precision following certain rules. These are outlined in 5.2.4.2.2 paragraph 8:

Except for assignment and cast (which remove all extra range and precision), the values of operations with floating operands and values subject to the usual arithmetic conversions and of floating constants are evaluated to a format whose range and precision may be greater than required by the type. The use of evaluation formats is characterized by the implementation-defined value of FLT_EVAL_METHOD:

Joseph S. Myers goes on to describe the situation in GCC before the patch that accompanies his post. The situation was just as bad as it is in your compiler (and countless others):

GCC defines FLT_EVAL_METHOD to 2 when using x87 floating point. Its implementation, however, does not conform to the C99 requirements for FLT_EVAL_METHOD == 2, since it is implemented by the back end pretending that the processor supports operations on SFmode and DFmode:

  • Sometimes, depending on optimization, a value may be spilled to memory in SFmode or DFmode, so losing excess precision unpredictably and in places other than when C99 specifies that it is lost.
  • An assignment will not generally lose excess precision, although -ffloat-store may make it more likely that it does.

The C++ standard inherits its math headers from C99 (FLT_EVAL_METHOD itself is defined in <float.h>; the related types float_t and double_t live in <math.h>). For this reason you might expect C++ compilers to follow suit, but they do not seem to be taking the issue as seriously. Even G++ still does not support -fexcess-precision=standard, although it uses the same back end as GCC (which has supported this option since Joseph S. Myers's post and the accompanying patch).
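As a practical footnote, and as suggested in the comments, you can ask a compiler which evaluation model it claims to implement by printing FLT_EVAL_METHOD (a sketch; the macro comes from C99's <float.h>, so a pre-C99 compiler such as VS2005 may not define it):

#include <float.h>
#include <stdio.h>

int main(void)
{
#ifdef FLT_EVAL_METHOD
    /*  0: evaluate each operation to its type's own precision (typical with SSE)
        1: evaluate float as double
        2: evaluate float and double as long double (the x87 model)
       -1: indeterminable */
    printf("FLT_EVAL_METHOD = %d\n", (int)FLT_EVAL_METHOD);
#else
    printf("FLT_EVAL_METHOD is not defined (pre-C99 compiler?)\n");
#endif
    return 0;
}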

– Pascal Cuoq