19

While answering a question where I suggested -ffast-math, a commenter pointed out that it is dangerous.

My personal feeling is that, outside scientific calculations, it is OK. I also assume that serious financial applications use fixed point instead of floating point.

Of course, if you want to use it in your project, the ultimate answer is to test it on your project and see how much it affects the results. But I think a general answer can be given by people who have tried such optimizations and have experience with them:

Can ffast-math be used safely on a normal project?

Given that IEEE 754 floating point has rounding errors, the assumption is that you are already living with inexact calculations.


This answer was particularly illuminating on the fact that -ffast-math does much more than reorder operations in ways that produce slightly different results (it does not check for NaN or zero and disables signed zeros, just to name a few), but I fail to see what the effects of these would ultimately be in real code.


I tried to think of typical uses of floating points, and this is what I came up with:

  • GUI (2D, 3D, physics engine, animations)
  • automation (e.g. car electronics)
  • robotics
  • industrial measurements (e.g. voltage)

and school projects, but those don't really matter here.

bolov
  • Since much scientific software uses Fortran, which gives somewhat weaker guarantees about floating-point calculations than C/C++/IEEE 754, I'd argue that "outside scientific calculations" is probably not quite the right criterion. – EOF Aug 16 '16 at 15:38
  • The reference to "typical project" would appear to make this question much too broad, on top of this the question appears to be soliciting opinions. I have 25 years experience with software optimization and floating-point arithmetic in a variety of fields, and the best general answer I could give is "It depends". – njuffa Aug 16 '16 at 16:27

6 Answers

16

One of the especially dangerous things it does is imply -ffinite-math-only, which lets the compiler assume that no NaNs ever exist, so explicit NaN tests can be folded to constant false. That's bad news for any code that explicitly handles NaNs: it tries to test for NaN, but the test will lie through its teeth and claim that nothing is ever NaN, even when it is.

This can have really obvious results, such as letting NaNs bubble up to the user when previously they would have been filtered out at some point. That's bad of course, but probably you'll notice and fix it.

A more insidious problem arises when the NaN checks were there for error checking, guarding something that really isn't supposed to ever be NaN. Perhaps through some bug, bad data, or another effect of -ffast-math, it becomes NaN anyway, and now you're not checking for it, because by assumption nothing is ever NaN, so isnan is a synonym for false. Things will go wrong, spuriously and long after you've already shipped your software, and you will get an "impossible" error report: you did check for NaN, it's right there in the code, it cannot be failing! But it is, because someone someday added -ffast-math to the flags. Maybe you even did it yourself, not knowing fully what it would do, or having forgotten that you used a NaN check.
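
A minimal sketch of that failure mode (the function and its names are hypothetical, not taken from the question):

#include <math.h>
#include <stdio.h>

/* Compiled with -ffast-math (which implies -ffinite-math-only), the
   compiler may assume r can never be NaN and fold this check to
   "if (0)", so bad input sails through unreported. */
double normalize(double x, double scale)
{
    double r = x / scale;          /* scale == 0.0 can yield NaN or Inf */
    if (isnan(r)) {                /* may be optimized away entirely */
        fprintf(stderr, "normalize: bad input\n");
        return 0.0;
    }
    return r;
}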

So then we might ask, is that normal? That's getting quite subjective, but I would not say that checking for NaN is especially abnormal. Going fully circular and asserting that it isn't normal because -ffast-math breaks it is probably a bad idea.

It does a lot of other scary things as well, as detailed in other answers.

harold
  • What if `a.cpp` is compiled without `-ffast-math` and calls into `b.cpp`, which is compiled with `-ffast-math`? Can `a.cpp` still check for `NaN`? – somebody4 Jul 05 '21 at 03:26
13

I won't go so far as to recommend avoiding this option, but I recall one instance where unexpected floating-point behavior struck back.

The code contained an innocent-looking construct like this:

float X, XMin, Y;
/* ... X and XMin get assigned elsewhere ... */
if (X < XMin)
{
    Y = 1.0f / (XMin - X);
}

This sometimes raised a division-by-zero error, because the comparison was carried out on the full 80-bit representation (Intel x87 FPU), while the subtraction was later performed on values truncated to their 32-bit representation, where they could be equal, making XMin - X zero.
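
A hedged defensive rewrite (my sketch, not from the original answer) is to compute the difference once and branch on the value that will actually be used:

float D = XMin - X;    /* the assignment rounds to float, discarding x87
                          excess precision (C99 semantics; older GCC needed
                          -ffloat-store or -fexcess-precision=standard
                          to actually honor this) */
if (D > 0.0f)
{
    Y = 1.0f / D;
}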

  • Ah, nice example. You'd probably also get this if your FPU was set to treat denormals as zero (`FTZ` or `DAZ` or some equivalent thereof). You could easily get hit by this on ARM NEON, since it doesn't support denormals. – EOF Aug 16 '16 at 17:10
12

The short answer: No, you cannot safely use -ffast-math except on code designed to be used with it. There are all sorts of important constructs for which it generates completely wrong results. In particular, for arbitrarily large x, there are expressions with correct value x but which will evaluate to 0 with -ffast-math, or vice versa.
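
A concrete illustration of the "vice versa" case (my own sketch of the claim; the comments below debate this same expression):

#include <stdio.h>

int main(void)
{
    volatile double a = 1.0;   /* volatile only defeats constant folding */
    double b = 1e308;
    /* Under strict IEEE 754, a is absorbed when a + b rounds, so the
       result is 0.0. With -ffast-math the compiler may reassociate to
       a + (b - b) and print 1.0 instead -- and a can be made
       arbitrarily large. */
    double r = a + b - b;
    printf("%g\n", r);
    return 0;
}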

As a more relaxed rule, if you're certain the code you're compiling was written by someone who doesn't actually understand floating-point math, using -ffast-math probably won't make the results any more wrong (vs. the programmer's intent) than they already were. Such a programmer will not be performing intentional rounding or other operations that -ffast-math badly breaks, and probably won't be using NaNs and infinities, etc. The most likely negative consequence is that computations which already had precision problems blow up and get worse. I would argue that this kind of code is already bad enough that you should not be using it in production to begin with, with or without -ffast-math.

From personal experience, I've had enough spurious bug reports from users trying to use -ffast-math (or even who have it buried in their default CFLAGS, ugh!) that I'm strongly leaning towards putting the following fragment in any code with floating-point math:

#ifdef __FAST_MATH__
#error "-ffast-math is broken, don't use it"
#endif

If you still want to use -ffast-math in production, you need to actually spend the effort (lots of code review hours) to determine if it's safe. Before doing that, you probably want to first measure whether there's any benefit that would be worth spending those hours, and the answer is likely no.

Update several years later: As it turns out, -ffast-math gives GCC license to make transformations that effectively introduce undefined behavior into your program, causing miscompilation with arbitrarily large fallout. See for example PR 93806 and related bugs. So really, no, it's never safe to use.

R.. GitHub STOP HELPING ICE
  • Another class of programs where it's generally safe are those that process uncontrolled floating-point input signals. Algorithms that are stable with regard to such inputs also tend to be stable against the numerical errors introduced by `-ffast-math`. But the type of engineers who choose stable algorithms also tend to grok what `-ffast-math` does, and [deal with it](http://stackoverflow.com/questions/24346957/how-to-set-icc-attribute-fp-model-precise-for-a-single-function-to-prevent-as) – MSalters Aug 16 '16 at 22:32
  • @MSalters: "Stability against numerical errors" is by no means a sufficient condition for `-ffast-math` to be safe. That would be sufficient if `-ffast-math` just produced small relative errors, but as I mentioned it can produce arbitrarily large error (e.g. in things like `a+b-b`). Also if you're processing uncontrolled inputs it's likely that infinities and nans may occur in your computations; if the compiler assumes they don't, bad things could happen. – R.. GitHub STOP HELPING ICE Aug 17 '16 at 02:36
  • @R.. Ironically, you chose the example where `-ffast-math` may actually save you from introducing imprecision. [If you have `-ffast-math` on and the compiler can see the expression as `a+b-b`, then it can actually apply associative math and reduce it to `a`.](https://godbolt.org/g/bMT377) – KevinZ Aug 17 '16 at 06:14
  • @KevinZ: You might want to have a peek at Kahan summation; the non-associative math can be intentional. – MSalters Aug 17 '16 at 07:40
  • Optimizing `a+b-b` to `a` is **not** an example of introducing an arbitrarily large error. It amounts to computing `a + b` with arbitrary precision, something that compilers are already allowed to do **without** `-ffast-math` as a floating-point expression contraction (compiler authors seem to have decided that it was alright for them to have `FP_CONTRACT` default to `on`—or some variation of `on` that they call `fast`, I don't even want to know—and this even if you do not use `-ffast-math`). – Pascal Cuoq Aug 17 '16 at 09:23
  • The distance between the result of `a + b` under IEEE 754 and the arbitrary-precision version is a small relative error with respect to `a + b`. It is only “arbitrarily large” in the sense that `a + b` can be made arbitrarily large, but that's so obvious as to be meaningless. Relative errors introduced, or as it were, not introduced, are relative to each intermediate computation, not to the end result. This has always been how “relative error” works. – Pascal Cuoq Aug 17 '16 at 09:24
  • @MSalters Of course I know it is intentional, but outside of hard science, the probability of people caring about precision beyond 5 decimal digits is very low. – KevinZ Aug 17 '16 at 14:16
  • @PascalCuoq That's incorrect, associative fp requires you to ignore signaling NaNs, trapping, and the sign of 0. I suggest you go try out the example I linked above. – KevinZ Aug 17 '16 at 14:22
  • @KevinZ I have no idea what “that” is in “that's incorrect”. But I'm not really interested. I was commenting on R..'s reaction to MSalters' remark that numerically stable algorithms are fine, pointing out in essence that `a+b-b` is not a numerically stable algorithm—it suffers from catastrophic cancellation—in the conditions in which R.. claims that it “produce arbitrarily large error”, and thus not a counterexample to that claim. – Pascal Cuoq Aug 17 '16 at 15:18
  • @PascalCuoq: I forgot about `FP_CONTRACT`, which, yes, is just about as bad as `-ffast-math`. But with it disabled, as it generally should be, computing `a+b` with arbitrary precision is not permitted. `a+b` is defined to be correctly rounded in the actual (not nominal; possibly excess-precision) type used. `a+b-b` is not intended to be "numerically stable". It's intended to perform the rounding operation that IEEE 754 defines it to perform. – R.. GitHub STOP HELPING ICE Aug 17 '16 at 16:11
  • @PascalCuoq: Also, even with `FP_CONTRACT` left on, it only applies within single expressions. So something like `double_t tmp = a+b; tmp-b;` is still well-defined. See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=37845 – R.. GitHub STOP HELPING ICE Aug 17 '16 at 16:17
  • @PascalCuoq: Ah, I see why "numerically stable" was relevant - you were referring to MSalters' comment and my response to it. So indeed his comment may be correct. – R.. GitHub STOP HELPING ICE Aug 17 '16 at 16:19
9

Given that IEEE 754 floating point has rounding errors, the assumption is that you are already living with inexact calculations.

The question you should answer is not whether the program expects inexact computations (it had better expect them, or it will break with or without -ffast-math), but whether the program expects approximations to be exactly those predicted by IEEE 754, and special values that behave exactly as predicted by IEEE 754 as well; or whether the program is designed to work fine with the weaker hypothesis that each operation introduces a small unpredictable relative error.

Many algorithms do not make use of special values (infinities, NaN) and are designed to work well in a computation model in which each operation introduces a small nondeterministic relative error. These algorithms work well with -ffast-math, because they do not rely on the error of each operation being exactly the error predicted by IEEE 754. The algorithms also work fine when the rounding mode is other than the default round-to-nearest: the error in the end may be larger (or smaller), but an FPU in round-upwards mode also implements the computation model that these algorithms expect, so they work more or less identically well in these conditions.
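
A sketch of an algorithm in that robust class (my illustration, not from the answer): a self-correcting Newton iteration only needs each step to be approximately right, so a small per-operation error merely perturbs an iterate that the next step pulls back toward the root.

/* Newton's method for sqrt(a), iterating from above: each step only
   needs to be approximately correct, so the loop converges under
   -ffast-math and under non-default rounding modes alike. */
double newton_sqrt(double a)
{
    if (a <= 0.0)
        return 0.0;                    /* domain guard for the sketch */
    double x = a < 1.0 ? 1.0 : a;      /* initial guess >= sqrt(a) */
    double prev;
    do {
        prev = x;
        x = 0.5 * (x + a / x);
    } while (x < prev);                /* monotone descent stops at the root */
    return x;
}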

Other algorithms (for instance Kahan summation, “double-double” libraries in which numbers are represented as the sum of two doubles) expect the rules to be respected to the letter, because they contain smart shortcuts based on subtle behaviors of IEEE 754 arithmetic. You can recognize these algorithms by the fact that they do not work when the rounding mode is other than expected either. I once asked a question about designing double-double operations that would work in all rounding modes (for library functions that may be pre-empted without a chance to restore the rounding mode): it is extra work, and these adapted implementations still do not work with -ffast-math.
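
By contrast, a textbook sketch of Kahan summation shows the kind of shortcut that breaks: the compensation term is algebraically zero, and -ffast-math is allowed to simplify it away, silently degrading the code to naive summation.

#include <stddef.h>

/* Kahan compensated summation: c recovers the low-order bits each
   addition rounds away. The expression (t - sum) - y is algebraically
   zero, so -ffast-math may fold c to 0.0 and destroy the compensation. */
double kahan_sum(const double *x, size_t n)
{
    double sum = 0.0, c = 0.0;
    for (size_t i = 0; i < n; i++) {
        double y = x[i] - c;
        double t = sum + y;
        c = (t - sum) - y;
        sum = t;
    }
    return sum;
}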

Pascal Cuoq
8

Yes, you can use -ffast-math on normal projects, for an appropriate definition of "normal projects." That includes probably 95% of all programs written.

But then again, 95% of all programs written would not benefit much from -ffast-math either, because they don't do enough floating point math for it to be important.

John Zwinck
  • Gotta love the "95% of all programs written would not benefit much from -ffast-math either". Why go against IEEE unless there are demonstrable benefits? – chux - Reinstate Monica Aug 16 '16 at 15:56
  • -1 the claims are not justified and no useful information is given for making an informed decision. If the class of programs being considered covers those that don't actually do any math, the claim is probably true but not very interesting because even if only 5% of all programs break, that might be 25-50% of all programs doing significant math. – R.. GitHub STOP HELPING ICE Aug 16 '16 at 18:11
-5

Yes, it can be used safely, provided that you know what you are doing. This implies that you understand that floating-point values represent magnitudes, not exact values. This means:

  1. You always do a sanity check on any external fp input.
  2. You never divide by 0.
  3. You never check for equality, unless you know both values are integers with absolute value small enough to be represented exactly in the mantissa (see the sketch after this list).
  4. etc.
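
A minimal sketch of point 3, using a relative tolerance instead of == (the 1e-9 is an arbitrary placeholder, not a universally correct choice):

#include <math.h>
#include <stdbool.h>

/* Treat a and b as equal when they differ by a tiny fraction of
   their magnitude; pick a tolerance suited to your domain. */
bool nearly_equal(double a, double b)
{
    double tol = 1e-9 * fmax(fabs(a), fabs(b));
    return fabs(a - b) <= tol;
}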

In fact, I would argue the converse. Unless you are working in very specific applications where NaNs and denormals have meaning, or you really need that tiny incremental bit of reproducibility, -ffast-math should be on by default. That way, your unit tests have a better chance of flushing out errors. Basically, whenever you think FP calculations have either reproducibility or precision, even under IEEE, you are wrong.

KevinZ
  • This answer, and especially the last sentence, are factually wrong. Each elementary arithmetic operation has a single well-defined result for the actual types (see: excess precision and `double_t`, etc.) and rounding modes involved. There are many applications where you need this well-definedness not just for ideal precision but for the results to be meaningful *at all*. – R.. GitHub STOP HELPING ICE Aug 16 '16 at 20:09
  • @R.. You are so wrong, FP is fundamentally imprecise. For x86, `a+b != a+b` if the values for the first addition came from the FP registers and the second addition had the values loaded from memory. The solution is to force a load from memory every time: `-ffloat-store`. Of course there are _many_ applications where IEEE semantics matter, but they are probably less than 5% of the use cases. – KevinZ Aug 17 '16 at 06:05
  • Nope. Your example is a non-conforming compiler behavior of some buggy compilers. GCC does not do it in standards-conforming mode since 4.6.something. `-ffloat-store` is not a correct workaround for this in older versions; it creates other incorrect behaviors. – R.. GitHub STOP HELPING ICE Aug 17 '16 at 16:06
  • @KevinZ That's exactly the kind of behaviour that the compiler is permitted to do with `-ffast-math` that isn't actually allowed otherwise! The IEEE standard indicates the same value is returned for the same operations with the same inputs and settings *every time*. If you tell your compiler not to follow it, then of course things like this can happen. – Jordan Melo Aug 18 '16 at 19:20
  • @JordanMelo Well, the `a+b != a+b` behavior isn't part of `-ffast-math`. `a+b != a+b` is actually the default behavior unless you turn on `-ffloat-store`. Neither GCC nor clang, and especially not ICC, cares much about IEEE compliance. That brings us back to the original claim that most users don't care, which actually answers the question posed in the title. – KevinZ Aug 20 '16 at 04:12