
I have read somewhere that there is a source of non-determinism in C double-precision floating point as follows:

  1. The C standard says that 64-bit floats (doubles) are required to produce only about 64-bit accuracy.

  2. Hardware may do floating point in 80-bit registers.

  3. Because of (1), the C compiler is not required to clear the low-order bits of floating-point registers before stuffing a double into the high-order bits.

  4. This means YMMV, i.e. small differences in results can happen.

Is there any now-common combination of hardware and software where this really happens? I see in other threads that .NET has this problem, but are C doubles via gcc OK? (E.g. I am testing for convergence of successive approximations based on exact equality.)

phuclv
Lucas Membrane
  • This might be relevant: http://stackoverflow.com/questions/17030230/is-specifying-floating-point-type-sufficient-to-guarantee-same-results – Fabian Schuiki Jun 21 '14 at 08:45
  • In general I don't think this is a compiler issue at all. If GCC inserts an FP assembly instruction for a 64-bit double, different CPUs will load that into their FPUs in a different way. Intel devices have a tendency to calculate in >=80-bit precision, but this may vary greatly. So I wouldn't even assume that the same binary produces the same results on two different machines, as the hardware optimization performed is transparent but still messes up your result. – Fabian Schuiki Jun 21 '14 at 08:48
  • @Fabian The other question is about repeatability across platforms, which is a stronger constraint than repeatability across different runs on the same platform. – user4815162342 Jun 21 '14 at 08:48
  • I believe the answer is yes. Which implies even `x == x` is not guaranteed to return true (and this has nothing to do with infinity or NaN's; it's because the compiler might load the same variable into an 80-bit register once, and a 64-bit register or memory another time). – user541686 Jun 21 '14 at 08:48
  • All computers are deterministic. One input - one output - always unless the thing is broken – Ed Heal Jun 21 '14 at 08:49
  • @EdHeal: A program is not a computer. – user541686 Jun 21 '14 at 08:49
  • @EdHeal They are indeed deterministic. But not predictable, since different CPUs produce different results due to their implementation. And as Mehrdad mentioned, even the same CPU may produce slightly different results each time the computation is run. – Fabian Schuiki Jun 21 '14 at 08:51
  • @Mehrdad - Universal Turing Machine. The electronics just run it. – Ed Heal Jun 21 '14 at 08:52
  • @EdHeal: Just to prove you're not the only person in the world who knows how to be uselessly pedantic, you're [actually](https://en.wikipedia.org/wiki/Hardware_random_number_generator) [wrong](https://en.wikipedia.org/wiki/RdRand). – user541686 Jun 21 '14 at 08:54
  • @Fabian - This is not true - and I hope not. Bung my card into the machine: does it say I can have £10 or not? Roll the dice (in your world): I might get supper tonight or not. This is obviously not true. We depend on computers to be predictable. – Ed Heal Jun 21 '14 at 08:57
  • @Fabian The result is determined by IEEE-754 as long as only the 64-bit representation is used (and the configuration, e.g. the rounding mode, is the same). – starblue Jun 21 '14 at 09:06
  • Please explain how something can be deterministic but not predictable. – Ed Heal Jun 21 '14 at 09:09
  • @EdHeal: Please explain what exactly you're hoping to get out of your conversation. – user541686 Jun 21 '14 at 09:12
  • Using 64-, 80-, or 128-bit precision floating point does NOT imply any given accuracy for an algorithm. Errors can accumulate, so an assessment of the required accuracy should be made and the convergence test made accordingly, e.g. abs(delta) <= x. If you want exact algorithmic results, use integers or fixed-point arithmetic, not FP! – Rob11311 Jun 21 '14 at 11:33
  • @Mehrdad - If it is deterministic it is predictable. Those words mean the same concept. – Ed Heal Jun 21 '14 at 15:41
  • Floating-point arithmetic is normally not allowed to be treated as associative, so that results stay deterministic. However, in parallel programming with threads or with SIMD, a looser floating-point model is often used that does allow associative floating-point arithmetic. There is then no guarantee that you will get the same result on every system. – Z boson Jun 23 '14 at 11:51

3 Answers

12

The behavior on implementations with excess precision, which seems to be the issue you're concerned about, is specified strictly by the standard in most if not all cases. Combined with IEEE 754 (assuming your C implementation follows Annex F) this does not leave room for the kinds of non-determinism you seem to be asking about. In particular, things like x == x (which Mehrdad mentioned in a comment) failing are forbidden since there are rules for when excess precision is kept in an expression and when it is discarded. Explicit casts and assignment to an object are among the operations that drop excess precision and ensure that you're working with the nominal type.

Note however that there are still a lot of broken compilers out there that don't conform to the standards. GCC intentionally disregards these rules unless you use -std=c99 or -std=c11 (i.e. the default "gnu99" and "gnu11" modes are intentionally broken in this regard). And prior to GCC 4.5, correct handling of excess precision was not supported at all.

R.. GitHub STOP HELPING ICE
  • Isn't IEEE754 broken by gcc only if you ask for `-ffast-math`? – Matteo Italia Jun 21 '14 at 13:21
  • `-fexcess-precision=fast` is on by default in the non-standards-conforming profiles, including the default one (gnu89 or gnu99 or whatever it is now). – R.. GitHub STOP HELPING ICE Jun 21 '14 at 14:52
  • Just checked right now, you are right. Fortunately on AMD64 it uses SSE by default, so there's no excess precision to deal with. – Matteo Italia Jun 21 '14 at 15:26
  • I wish I could up vote this answer more than once. Floating point numerics is _subtle_, but that's **hugely** different from nondeterministic. Floating point numerics is used for designing bridges, buildings, and airplanes; you can imagine that non-determinism would not be viewed as helpful in these disciplines. – Jonathan Dursi Jun 22 '14 at 15:21
  • @JonathanDursi How a given source code is translated is not "subtle", it's non-deterministic. – curiousguy Jun 26 '19 at 13:09
3

This may happen on Intel x86 code that uses the x87 floating-point unit (except probably point 3, which seems bogus: the low-order bits will be set to zero). So the hardware platform is very common, but on the software side the use of x87 is dying out in favor of SSE.

Basically, whether a number is represented in 80 or 64 bits is at the whim of the compiler and may change at any point in the code, with, for example, the consequence that a number which just tested non-zero is now zero.

See "The pitfalls of verifying floating-point computations", page 8ff.

starblue
1

Testing for exact convergence (or equality) in floating point is usually a bad idea, even in a totally deterministic environment. FP is an approximate representation to begin with. It is much safer to test for convergence to within a specified epsilon.

JRobert
  • Yes, but for any test, the non-determinism means that you never know whether passing the test will happen another time. Very disturbing! – curiousguy Jun 27 '19 at 14:14