
I ran into a floating point error today (verified in JavaScript and Python) that seems peculiar.

> 15 * 19.22
288.29999999999995

What's really weird about this scenario is that these numbers are well within the range of numbers representable by floating point. We're not dealing with really large or really small numbers.

In fact, if I simply move the decimal points around, I get the correct answer without rounding error.

> 15 * 19.22
288.29999999999995
> 1.5 * 192.2
288.29999999999995
> .15 * 1922.
288.3
> 150 * 1.922
288.3
> 1500 * .1922
288.3
> 15 * 1922 / 100
288.3
> 1.5 * 1.922 * 100
288.3
> 1.5 * .1922 * 1000
288.3
> .15 * .1922 * 10000
288.3

Clearly there must be some intermediate number that isn't representable with floating point, but how is that possible?

Is there a "safer" way of multiplying floating point numbers to prevent this issue? I figured that if the numbers were of the same order of magnitude, then floating point multiplication would work the most accurately, but clearly that is a wrong assumption.

Chet
  • In short: shifting the decimal point changes the (binary!) exponent, so gaps between two subsequent values of the significand will result in larger or smaller differences (depending on the direction of the shift). If the error were static, you could simply calculate everything with a high exponent and only reduce it at the end. Say the error were a static 1e-20. Then you could calculate with exponents near 1e+100 and the error would be entirely negligible. But that is not possible. The error will always be somewhere in the lower bits of the significand. Adding 1 to the exponent doubles it. – Rudy Velthuis Oct 18 '18 at 20:15
    I think this is really just another version of the most common dup on stackoverflow: [is floating point math broken?](https://stackoverflow.com/q/588004/238704). However, normally this would have been closed as a dup a long time ago, so I'm a little confused and will hold off voting to close. – President James K. Polk Oct 18 '18 at 23:56
    @JamesKPolk This question has a bit of a twist, so I agree with not voting to close. I also thought that it would be an easy close-vote based on the title, but then refrained from voting to close it. – John Coleman Oct 19 '18 at 00:31

3 Answers


Why does floating point error change based on the position of the decimal?

Because you're working in base 10. IEEE-754 double-precision binary floating point works in binary (base 2). In that representation, for instance, 1 can be represented exactly, but 0.1 cannot.¹

What's really weird about this scenario is that these numbers are well within the range of numbers representable by floating point. We're not dealing with really large or really small numbers.

As you can see from my statement above, even just going to tenths, you run into imprecise numbers without having to go to outrageous values (as you do to get to unrepresentable whole numbers like 9,007,199,254,740,993). Hence the famous 0.1 + 0.2 = 0.30000000000000004 thing:

console.log(0.1 + 0.2);
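To see what's behind the shortest-round-trip output, you can ask for more digits than the default; this is just a diagnostic sketch using `toPrecision`:

```javascript
// Print 25 significant digits instead of the default shortest form.
console.log((0.1).toPrecision(25));       // 0.1000000000000000055511151
console.log((0.2).toPrecision(25));       // 0.2000000000000000111022302
console.log((0.1 + 0.2).toPrecision(25)); // 0.3000000000000000444089210
```

Neither addend is exactly what the literal says, and the two errors happen to accumulate past the rounding threshold of the sum.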

Is there a "safer" way of multiplying floating point numbers to prevent this issue?

Not using built-in floating point. You might work only in whole numbers (since they're reliable from -9,007,199,254,740,992 through 9,007,199,254,740,992) and then when outputting, insert the relevant decimal. You might find this question's answers useful: How to deal with floating point number precision in JavaScript?.
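A minimal sketch of that whole-number approach, assuming the inputs can be scaled to hundredths:

```javascript
// Work in hundredths so every intermediate value is an exact integer.
const cents = 15 * 1922;  // 28830 -- exact: both factors are whole numbers
console.log(cents / 100); // 288.3
```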


¹ You may be wondering why, if 0.1 isn't represented exactly, console.log(0.1) outputs "0.1". It's because normally with floating point, when converting to string, only enough digits are output to differentiate the number from its nearest representable neighbor. In the case of 0.1, all that's needed is "0.1". Converting binary floating point to representable decimal is quite complicated, see the various notes and citations in the spec. :-)
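For instance (a quick check; the long literal below is just an arbitrary decimal chosen to round to the same double):

```javascript
// toString emits the shortest string that parses back to the same double.
console.log((0.1).toString()); // "0.1"
// A much longer decimal can still land on the very same stored value:
console.log(Number("0.10000000000000000555") === 0.1); // true
```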

T.J. Crowder
  • "only enough digits are output to differentiate the number from its nearest representable neighbor" -- interesting. I think I figured out a satisfying answer. Floating point is basically scientific notation in base 2. However 0.1 * 0.2^n will always have a decimal remainder. So you will never have a whole number that you can represent in binary! Thus you have a good suggestion to make them whole numbers. – Chet Oct 18 '18 at 19:44
  • @Chet Yes, this form of floating point (there are others) is binary with a mantissa and an exponent. *"So you will never have a whole number that you can represent in binary! "* No, that's incorrect. As I said above, all of the whole numbers -9,007,199,254,740,992 through 9,007,199,254,740,992 (or there's an argument for -9,007,199,254,740,991 through 9,007,199,254,740,991) can be represented exactly. 9...992 is the first number at which the exponent is so large there's no bit for the binary 1s place (but it's an even number, so that's okay -- 9...993 can't be represented, though). – T.J. Crowder Oct 19 '18 at 06:07

It is because of how binary floating point is represented. Take the values 0.8, 0.4, 0.2, and 0.1 as doubles. Their actually stored values are:

0.8 --> 0.8000000000000000444089209850062616169452667236328125
0.4 --> 0.40000000000000002220446049250313080847263336181640625
0.2 --> 0.200000000000000011102230246251565404236316680908203125
0.1 --> 0.1000000000000000055511151231257827021181583404541015625

As you can easily see, the difference from the exact decimal value halves each time you halve the number. That is because all of these have the exact same significand and only a different exponent. This gets clearer if you look at their hex representations:

0.8 --> 0x1.999999999999ap-1
0.4 --> 0x1.999999999999ap-2
etc...

So the difference between the real, mathematical value and the actually stored value is somewhere in and under that last bit. That bit gets a smaller value the lower the exponent goes. And it goes up the other way: 1.6 is 0x1.999999999999ap+0, etc. The higher you go, the larger the value of that difference will become, because of that exponent. That is why it is called a relative error.

And if you shift the decimal point, you are in fact changing the binary exponent as well. Not exactly proportionally, because we are dealing with different number bases, but pretty much "equivalently" (if that is a proper word). The higher the number, the higher the exponent, and thus the higher the value of the difference between mathematical and floating point value becomes.
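The shared-significand point above is easy to check in a console: multiplying by 2 only bumps the binary exponent, so it is exact, and the errors scale right along with the values (a quick sketch):

```javascript
// Doubling a double is exact: the significand bits are untouched,
// only the exponent changes.
console.log(0.1 * 2 === 0.2); // true
console.log(0.2 * 2 === 0.4); // true
console.log(0.4 * 2 === 0.8); // true
// So subtracting recovers the smaller value bit-for-bit:
console.log(0.8 - 0.4 === 0.4); // true
```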

Rudy Velthuis

Not an answer, but a long comment.

The order of magnitude is not the culprit, as the floating-point representation normalizes the numbers to a mantissa between 1 and 2:

  15       = 1.875     x 2^3
  19.22    = 1.20125   x 2^4
 150       = 1.171875  x 2^7
   0.1922  = 1.5376    x 2^-3

and the exponents are processed separately.
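That normalization can be reproduced with a small frexp-style helper (`decompose` is a hypothetical name, not a built-in):

```javascript
// Split a positive finite double into a mantissa in [1, 2) and an exponent.
// Math.log2 is only used to locate the right power of two; the division
// by that power of two is exact, so the mantissa comes out bit-for-bit.
function decompose(x) {
  const exp = Math.floor(Math.log2(x));
  return [x / 2 ** exp, exp];
}

console.log(decompose(15));     // [ 1.875, 3 ]
console.log(decompose(19.22));  // [ 1.20125, 4 ]
console.log(decompose(150));    // [ 1.171875, 7 ]
console.log(decompose(0.1922)); // [ 1.5376, -3 ]
```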