4

I've heard that there are many problems with floats/doubles on different CPUs.

If I want to make a game that uses floats for everything, how can I be sure the float calculations are exactly the same on every machine, so that my simulation will look exactly the same on every machine?

I am also concerned about writing/reading float values to files or sending/receiving them between different computers. What conversions must be done, if any?

I need to be 100% sure that my float values are computed exactly the same, because even a slight difference in the calculations will result in a totally different future state. Is this even possible?

Rookie
  • Have a look at [this](http://stackoverflow.com/questions/6722293/why-comparing-double-and-float-leads-to-unexpected-result/6722297#6722297) – Alok Save Jul 29 '11 at 12:34
  • If slight differences to that calculation are a problem, then floating point is probably a poor fit to begin with; consider fixed point or rationals or some other integer-based arithmetic – jk. Jul 29 '11 at 12:41
  • Floats are not computed with a dash of randomness. a * b will be the same if a and b have the same values. – R. Martinho Fernandes Jul 29 '11 at 12:49
  • You might find this article helpful: [What Every Computer Scientist Should Know About Floating-Point Arithmetic](http://download.oracle.com/docs/cd/E19957-01/806-3568/ncg_goldberg.html) – Ferdinand Beyer Jul 29 '11 at 12:35

2 Answers

2

Standard C++ does not prescribe any details about floating point types other than range constraints, and possibly that some of the maths functions (like sine and exponential) have to be correct up to a certain level of accuracy.

Other than that, at that level of generality, there's really nothing else you can rely on!
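For what it's worth, you can at least ask the implementation what it claims to provide. A minimal sketch using only the standard `<limits>` header (no further assumptions):

```cpp
#include <iostream>
#include <limits>

int main() {
    // is_iec559 reports whether float/double claim IEEE 754 (IEC 559) conformance.
    std::cout << "float is IEEE 754:  " << std::numeric_limits<float>::is_iec559  << '\n';
    std::cout << "double is IEEE 754: " << std::numeric_limits<double>::is_iec559 << '\n';

    // digits is the number of mantissa bits the type actually provides.
    std::cout << "float mantissa bits:  " << std::numeric_limits<float>::digits  << '\n';
    std::cout << "double mantissa bits: " << std::numeric_limits<double>::digits << '\n';
}
```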

That said, it is quite possible that you will not actually require bit-for-bit identical computations on every platform, and that the precision and accuracy guarantees of the float or double types will in fact be sufficient for simulation purposes.

Note that you cannot even produce a reliable result of an algebraic expression inside your own program when you modify the order of evaluation of subexpressions, so asking for the sort of reproducibility that you want may be a bit unrealistic anyway. If you need real floating point precision and accuracy guarantees, you might be better off with an arbitrary precision library with correct rounding, like MPFR - but that seems unrealistic for a game.
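To make the evaluation-order point concrete, here is a tiny self-contained example (the values are chosen purely for demonstration) where reassociating a single addition changes the result:

```cpp
#include <iostream>

int main() {
    float a = 1e20f, b = -1e20f, c = 1.0f;

    // Mathematically both expressions equal 1, but float addition is not associative:
    float left  = (a + b) + c;  // (1e20 + -1e20) + 1  ==  0 + 1     ==  1
    float right = a + (b + c);  // 1e20 + (-1e20 + 1)  ==  1e20 + -1e20  ==  0

    std::cout << left << ' ' << right << '\n';  // prints "1 0"
}
```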

Serializing floats is an entirely different story, and you'll have to have some idea of the representations used by your target platforms. If all platforms were in fact to use IEEE 754 floats of 32 or 64 bit size, you could probably just exchange the binary representation directly (modulo endianness). If you have other platforms, you'll have to think up your own serialization scheme.
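If you do settle on IEEE 754 across all your targets, one way to serialize is to copy the bit pattern into a fixed-width integer and write its bytes in an agreed-upon order. A rough sketch (the `write_float`/`read_float` helper names are made up for illustration, assuming 32-bit IEEE floats):

```cpp
#include <cstdint>
#include <cstring>
#include <limits>

static_assert(std::numeric_limits<float>::is_iec559, "assumes IEEE 754 float");
static_assert(sizeof(float) == sizeof(std::uint32_t), "assumes 32-bit float");

// Pack a float into 4 bytes in a fixed (big-endian) byte order.
void write_float(float value, unsigned char out[4]) {
    std::uint32_t bits;
    std::memcpy(&bits, &value, sizeof bits);   // reinterpret the bit pattern
    out[0] = static_cast<unsigned char>(bits >> 24);
    out[1] = static_cast<unsigned char>(bits >> 16);
    out[2] = static_cast<unsigned char>(bits >> 8);
    out[3] = static_cast<unsigned char>(bits);
}

// Unpack 4 big-endian bytes back into a float.
float read_float(const unsigned char in[4]) {
    std::uint32_t bits = (std::uint32_t(in[0]) << 24) | (std::uint32_t(in[1]) << 16)
                       | (std::uint32_t(in[2]) << 8)  |  std::uint32_t(in[3]);
    float value;
    std::memcpy(&value, &bits, sizeof value);
    return value;
}
```

Going through `memcpy` avoids strict-aliasing trouble, and the explicit shifts make the wire format independent of the host's endianness.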

Kerrek SB
  • so in a nutshell: if I want to use floats in a (network) game, I must write my own floating-point datatype that ensures it will work exactly the same on all machines, while reducing the efficiency of computation too? – Rookie Jul 29 '11 at 17:21
  • @Rookie: Well, if you really wanted the code to work on _any_ platform (not just _every_ platform), then yes, but realistically you can probably be less punishing to yourself and just assume that all your target platforms use IEEE floats. – Kerrek SB Jul 29 '11 at 17:25
  • Do IEEE floats always work exactly the same on every CPU? Even if it's an Intel CPU? ( http://en.wikipedia.org/wiki/Pentium_FDIV_bug ) – Rookie Jul 29 '11 at 21:27
  • The float representation is fixed by the IEEE standard, yes. But floating point *operations* needn't necessarily, at least at the higher level of algorithms and library functions. – Kerrek SB Jul 29 '11 at 23:28
  • Ah, OK, so... I have no way of ensuring my simulation will have the same end result on different CPUs then... I'm doing millions of additions and divisions, and if there's some difference in the results of the calculations, the end result won't be the same anymore, which is essential for my application. So how does any game overcome this problem? Do they just round the results every now and then to avoid possible differences in the calculations? What is the biggest difference two machines could have in their calculations? I could use that to determine at which intervals I should round the values. – Rookie Jul 29 '11 at 23:39
  • Right. Well, the actual, individual CPU instructions like add and multiply *might* be standardized by IEEE 754, too, but there's no way that'll survive into your program. If you really must have *identical* results, you should use a library, like MPFR. But very seriously, do you really need this? Using software floats will be many orders of magnitude slower. Oh, if you aren't multiplying a lot, you might consider fixed-point numbers! – Kerrek SB Jul 29 '11 at 23:48 *(a fixed-point sketch follows this thread)*
  • Yes, I really need that :/ and I am dividing a lot. I just wonder, if I can't rely on floats, how do other games work flawlessly with them online? – Rookie Jul 30 '11 at 10:29
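Regarding the fixed-point idea mentioned at the end of the thread above: here is a purely illustrative 16.16 fixed-point sketch (the `Fixed` type is invented for this example). Because it uses only integer arithmetic, results are bit-for-bit reproducible across conforming platforms, at the cost of range and precision:

```cpp
#include <cstdint>
#include <iostream>

// Toy 16.16 fixed-point number: integer math only, so every conforming
// platform computes exactly the same bits. Not production-ready.
struct Fixed {
    std::int32_t raw;  // stored as value * 65536

    static Fixed from_int(std::int32_t v) { return Fixed{v * 65536}; }

    Fixed operator+(Fixed o) const { return Fixed{raw + o.raw}; }
    Fixed operator-(Fixed o) const { return Fixed{raw - o.raw}; }
    Fixed operator*(Fixed o) const {
        // Widen to 64 bits so the intermediate product does not overflow.
        return Fixed{static_cast<std::int32_t>(std::int64_t(raw) * o.raw / 65536)};
    }
    Fixed operator/(Fixed o) const {
        return Fixed{static_cast<std::int32_t>(std::int64_t(raw) * 65536 / o.raw)};
    }

    double to_double() const { return raw / 65536.0; }
};

int main() {
    Fixed a = Fixed::from_int(3);
    Fixed b = Fixed::from_int(2);
    std::cout << (a / b).to_double() << '\n';  // 1.5, identical on every machine
}
```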
1

What every computer scientist should know about floating-point arithmetic: http://docs.sun.com/source/806-3568/ncg_goldberg.html

Zharf