
`-freciprocal-math` in GCC changes the following code

```c
double a = b / c;
```

to

```c
double tmp = 1 / c;
double a = b * tmp;
```

The GCC manual says that this optimization is unsafe and does not adhere to the IEEE standard, but I cannot think of an example. Could you give one?

Alok Save
Kid
  • I don't think IEEE has rules about how a compiler must use floating point. For instance, `x^2` is often strength-reduced to `x*x`, typically producing faster programs but with a different error than the computation the original programmer specified. So, @Kid, have you looked at the IEEE standard to see what limits it places on compilers using such arithmetic? – Ira Baxter May 21 '12 at 03:52
  • @RaymondChen: After IEEE rounding (up? down? even? none?) is applied, might not your b = c = 3 example produce the exact same result in a? – Ira Baxter May 21 '12 at 03:54
  • I was choosing an intuitive example that works in base ten. – Raymond Chen May 21 '12 at 04:33
  • I'm confused. This question appears to be about floating-point implementation on a binary machine (which is what GCC mostly supports). – Ira Baxter May 21 '12 at 04:57
  • The question is whether reciprocal math adheres to IEEE standards. It does not. For example, IEEE requires that `x/x = 1` for all finite nonzero `x`, but the reciprocal version does not satisfy this requirement: there are some values of `x` where `x * (1/x) = ±∞`. My b=c=3 example was an attempt to give a version of the answer that is easier to understand (since it's apparent that the OP is not familiar with the intricacies of floating point). – Raymond Chen May 21 '12 at 13:01
  • @RaymondChen Yes, I would like an example of `x * (1/x) = ±∞`; b=c=3 is not a suitable example, however, because it gives the right answer (see the sketch after this thread). – Kid May 22 '12 at 17:26
  • @RaymondChen Thanks, Raymond. That proved the point. – Kid May 22 '12 at 18:34
  • @Kid The need for a concrete example (instead of merely an explanation) makes me wonder if this was a homework assignment... – Raymond Chen May 22 '12 at 19:37
  • @RaymondChen No, it is not an assignment. – Kid May 22 '12 at 20:47
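
Following up on Raymond Chen's `x * (1/x) = ±∞` point above, here is a minimal sketch (my own example, not from the thread; `0x1p-1074` is the smallest positive subnormal double, one value whose reciprocal overflows):

```c
#include <stdio.h>

int main(void) {
    double x = 0x1p-1074;  /* smallest positive subnormal double (C99 hex literal) */
    /* x/x is exactly 1, as IEEE 754 requires for finite nonzero x. */
    printf("x / x     = %g\n", x / x);
    /* 1/x would be 2^1074, which overflows to +inf, so x * (1/x) is +inf. */
    printf("x * (1/x) = %g\n", x * (1.0 / x));
    return 0;
}
```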

2 Answers


Dividing by 10 and multiplying by 0.1000000000000000055511151231257827021181583404541015625 are not the same thing.
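
A concrete illustration of this answer (my own sketch, with the reciprocal multiply written out by hand; `b = 3, c = 10` is one pair where the two forms produce different results):

```c
#include <stdio.h>

int main(void) {
    volatile double b = 3.0, c = 10.0;  /* volatile: keep the operations at run time */
    double q   = b / c;      /* correctly rounded quotient: the double nearest 0.3 */
    double tmp = 1.0 / c;    /* rounded reciprocal of 10 */
    double r   = b * tmp;    /* what -freciprocal-math would compute */
    printf("b / c     = %.17g\n", q);   /* 0.29999999999999999 */
    printf("b * (1/c) = %.17g\n", r);   /* 0.30000000000000004 */
    printf("equal: %s\n", q == r ? "yes" : "no");
    return 0;
}
```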

R.. GitHub STOP HELPING ICE
  • True, it isn't. But then floating point doesn't give you a precise answer if you divide by ten, either. So the argument here seems to be how much error you are willing to accept in your floating point arithmetic. If you do standard IEEE float, you get a certain amount of error in your computation (including complete loss of precision in certain circumstances). If you do reciprocal math, you don't get IEEE-defined floating point error; whether you get more or less will depend on the actual computation you are doing. – Ira Baxter May 21 '12 at 03:50
  • @IraBaxter, division gives you a precise result if both operands are exactly represented, which they will be for integer values. – Mark Ransom May 21 '12 at 04:10
  • @MarkRansom: Uh, how do I get a precise (you mean exact?) result for "1.0/3.0"? – Ira Baxter May 21 '12 at 04:13
  • What am I missing? The OP seems to be asking about FP in general. Claiming you get great answers in special cases doesn't seem very interesting; 1 multiplied by the reciprocal of 1 gives perfect answers, but so what? I thought we were discussing special cases where precision isn't perfect. – Ira Baxter May 21 '12 at 05:01
  • This is why the option is unsafe. It breaks code that has an exact answer. And even if the rounding makes the answer come out the same, the inexact exception will wrongly be raised (see the sketch after this thread). An optimization is unsafe/wrong if it breaks even one piece of correct code; it doesn't have to have visibly broken results in all cases, or even in typical ones. – R.. GitHub STOP HELPING ICE May 21 '12 at 05:04
  • If you are writing floating point code and expecting an exact answer, you're going to be in big trouble. – Ira Baxter May 21 '12 at 06:09
  • @Ira: there are times when people know to expect exact answers. Also, even for those other times when people might not be taking such care, they may expect that answers produced on one system will match answers produced on another system. – Michael Burr May 21 '12 at 07:00
  • @Ira: also, for clarification, here's what GCC's docs say about the set of unsafe floating point optimizations: "This option is not turned on by any -O option since it can result in incorrect output for programs which depend on an exact implementation of IEEE or ISO rules/specifications for math functions. It may, however, yield faster code for programs that do not require the guarantees of these specifications." So the problem isn't expecting "exact answers" from floating point operations, it's expecting answers that are produced following certain specific rules. – Michael Burr May 21 '12 at 07:08
  • @Ira: That's only true if you don't understand floating point. – R.. GitHub STOP HELPING ICE May 21 '12 at 13:10
  • Most importantly: If you don't think this tiny inexactness is relevant in your software then go ahead and use -freciprocal-math. You are the intended user of this option. – phkahler May 21 '12 at 13:35
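
To see R..'s point about the inexact flag, here is a sketch using C99's `<fenv.h>` (an assumption-laden demo: the volatile qualifiers are there to keep the operations at run time, and a strictly conforming program would also need `FENV_ACCESS`):

```c
#include <fenv.h>
#include <stdio.h>

int main(void) {
    volatile double b = 10.0, c = 10.0;

    feclearexcept(FE_INEXACT);
    volatile double q = b / c;           /* 10/10 == 1 exactly: no inexact flag */
    printf("b/c     raises inexact: %d\n", !!fetestexcept(FE_INEXACT));

    feclearexcept(FE_INEXACT);
    volatile double r = b * (1.0 / c);   /* 1/10 isn't representable: inexact,  */
                                         /* even though the product still       */
                                         /* rounds to exactly 1.0               */
    printf("b*(1/c) raises inexact: %d\n", !!fetestexcept(FE_INEXACT));

    (void)q; (void)r;
    return 0;
}
```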

Perhaps I am thinking of a different compiler flag, but ...

Some processors have instructions that compute an approximate reciprocal. RCPSS on x86 (an SSE instruction) comes to mind; it has a relative error of at most 1.5 ∗ 2^−12. Using that flag may allow the compiler to select an approximate-reciprocal instruction, which might not be a safe thing to do, depending upon your application.
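
A quick sketch of what that approximation looks like (my example, not the answerer's; x86 with SSE, using the `_mm_rcp_ss` intrinsic, which maps to RCPSS):

```c
#include <stdio.h>
#include <xmmintrin.h>  /* SSE intrinsics; compile for x86 with SSE enabled */

int main(void) {
    float c = 3.0f;
    float exact  = 1.0f / c;                                  /* correctly rounded 1/c */
    float approx = _mm_cvtss_f32(_mm_rcp_ss(_mm_set_ss(c)));  /* RCPSS estimate */
    printf("1/c exact : %.9g\n", exact);
    printf("1/c RCPSS : %.9g\n", approx);
    printf("rel. error: %.3g\n", (approx - exact) / exact);   /* within 1.5 * 2^-12 */
    return 0;
}
```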

Hope this helps.

Sparky