
Does anyone have an explanation for this strange rounding in Haskell (GHCi, version 7.2.1)? Everything seems fine unless I multiply by 100.

*Main> 1.1 
1.1

*Main> 1.1 *10
11.0

*Main> 1.1 *100
110.00000000000001

*Main> 1.1 *1000
1100.0

*Main> 1.1 *10000
11000.0

Edit: What puzzles me is that the rounding error only shows up when multiplying by 100.

Edit (2): The comments I received made me realize that this is totally unrelated to Haskell and is a general issue with floating-point numbers. Numerous questions have already been asked (and answered) about floating-point oddities, where the underlying issue was typically the confusion of floats with real numbers.

Perl, Python, JavaScript, and C all report 1.1 * 100.0 = 110.00000000000001. Here is what C does:

double     10.0 * 1.1 = 11.000000000000000000000000
double    100.0 * 1.1 = 110.000000000000014210854715
double          110.0 = 110.000000000000000000000000
double   1000.0 * 1.1 = 1100.000000000000000000000000

The question "why does this happen only when multiplying by 100" (even though there is an exact representation for 110.0) is still unanswered, but I suppose there is no simple answer other than fully stepping through a floating-point multiplication. (Thanks to Dax Fohl for stressing that 10 is nothing special in binary.)
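For reference, the exact significand and exponent of the doubles involved can be inspected with decodeFloat (a standard RealFloat method, no imports needed). On an IEEE 754 machine with round-to-nearest this should print:

*Main> decodeFloat (1.1 :: Double)
(4953959590107546,-52)

*Main> decodeFloat (110.0 :: Double)
(7740561859543040,-46)

*Main> decodeFloat (1.1 * 100 :: Double)
(7740561859543041,-46)

The product lands one unit in the last place above the exactly representable 110.0, which is what prints as 110.00000000000001.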

Martin Drautzburg
  • This question gets asked [over](http://stackoverflow.com/questions/588004/is-javascripts-floating-point-math-broken) and [over](http://stackoverflow.com/questions/7185512/why-0-10-2-0-3-5-5511151231258e-17-in-php) and [over](http://stackoverflow.com/questions/6027937/javascript-float-subtract). –  Aug 03 '13 at 10:15
  • I think Martin is asking why it happens at 100 but not at 1000 or 10000. I found this odd at first too. But (presumably) the reason is that multiplication by a power of ten doesn't just shift digits; it goes through a binary multiplier, and you end up with whatever mantissa and exponent that gives you. Since 10 is nothing special in binary, you can end up with results that look odd at first glance, like this one. – Dax Fohl Aug 03 '13 at 13:58
  • [What every computer scientist should know about floating-point arithmetic](http://perso.ens-lyon.fr/jean-michel.muller/goldberg.pdf) – rampion Aug 03 '13 at 17:26
  • @rampion Thanks for posting a useless link to a 100-page document that does not directly address the question. – Pascal Cuoq Jul 05 '14 at 23:13

2 Answers


The number 1.1 cannot be represented with a finite number of digits in binary. Its binary expansion looks like 1.00011001100110011...

"Rounding errors" are just mathematically inevitable with simple floating-point arithmetic. If you want accuracy, use a Decimal number type.

http://support.microsoft.com/kb/42980
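In Haskell one can see exactly which value got stored by converting the double to a Rational; Rational is also the natural exact type to use here (a quick GHCi sketch; Data.Ratio ships with base):

*Main> import Data.Ratio
*Main> toRational (1.1 :: Double)
2476979795053773 % 2251799813685248
*Main> (11 % 10) * 100 :: Rational
110 % 1

The stored value is the dyadic rational nearest 11/10, not 11/10 itself; done in Rational, the multiplication is exact.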

Miklos Aubert
  • I understand that floats are not reals. My problem was that I could not understand why the error only shows up when multiplying by 100 and not when multiplying by 10 or 1000. – Martin Drautzburg Aug 03 '13 at 13:18
  • It may seem like a simple matter of appending or removing a 0, but in the binary fraction represented by the floating-point number it is not nearly so simple. That is the essence of why you are seeing this behavior. Someone else linked to a great article about what computer scientists should know about floating-point numbers, and I highly recommend it. – John Wiegley Aug 04 '13 at 03:59

> The question "why does this happen only when multiplying by 100" (even though there is an exact representation for 110.0) is still unanswered, but I suppose there is no simple answer other than fully stepping through a floating-point multiplication.

Well, I think there may be things one can say without going to the length of writing out the binary multiplication, assuming IEEE 754 arithmetic and the (default) round-to-nearest rounding mode.

The double 1.1d is within half a ULP of the real number 1.1. When you multiply it by 10, 100, 1000, and a few more powers of ten, you multiply by a number N that is exactly representable as a double, with the additional property that the real product N * 1.1 is exactly representable as a double, too. That makes N * 1.1 a good candidate for the result of the floating-point multiplication, which we'll write RN(N * 1.1d). But the multiplication does not automatically round to N * 1.1:

RN(N * 1.1d) = N * 1.1d + E1 with |E1| <= 0.5 * ULP(N*1.1d)

             = N * (1.1 + E2) + E1 with |E2| <= 0.5 * ULP(1.1)

             = N * 1.1 + (N * E2 + E1)

And the question now is how |N * E2 + E1| compares to ULP(N * 1.1d): we have assumed N * 1.1 is exactly a floating-point number, so if the result of the multiplication (which is also a floating-point number) is within 1 ULP of N * 1.1, it has to be N * 1.1.
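The decisive quantity N * E2 (the distance of the exact product N * 1.1d from the representable N * 1.1, before rounding) can be computed exactly in Haskell with Rational arithmetic. A sketch, with made-up names e2, ulpOf and scaledError; ulpOf relies on decodeFloat returning the significand scaled by exactly one ULP:

import Data.Ratio

-- E2: the exact error of the double nearest 1.1, relative to the real 11/10.
e2 :: Rational
e2 = toRational (1.1 :: Double) - 11 % 10

-- The spacing of doubles around x: decodeFloat x = (m, e) with x = m * 2^e,
-- and for a normalized double the gap to its neighbours is exactly 2^e.
ulpOf :: Double -> Rational
ulpOf x = 2 ^^ snd (decodeFloat x)

-- N * E2 measured in ULPs of N * 1.1 (E2 is positive here, so no absolute
-- value is needed). Strictly above 1/2, the product cannot round back to
-- the exactly representable N * 1.1.
scaledError :: Integer -> Rational
scaledError n = fromInteger n * e2 / ulpOf (fromInteger n * 1.1)

Loading this in GHCi gives

*Main> map scaledError [10, 100, 1000, 10000]
[1 % 2,5 % 8,25 % 64,125 % 256]

Only N=100 pushes the error strictly past half a ULP (5/8). N=10 produces an exact tie (1/2), which round-to-nearest-even resolves to the even significand, and that happens to be exactly 11.0.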


In short, it is not so much what's special about 100… It is what's special about the real 1.1d * 100, which 1) is close to a power of two while being below it and 2) has an error of the same sign as the error when converting the real 1.1 to double.

Every time the real N * 1.1d is relatively closer to the nearest inferior power of two than 1.1 is to 1, the result of the floating-point multiplication of 1.1d by N has to be exactly N * 1.1 (I think). An example of this case is N=1000: N * 1.1d ~ 1100, just above 1024.

When the real N * 1.1d is relatively closer to the immediately superior power of two than 1.1 is to 2, there may be a floating-point number that represents N * 1.1d better than N * 1.1 does. But if the errors E1 and E2 compensate each other (i.e. have opposite signs), this should not happen.
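The exponents reported by decodeFloat make this visible (a follow-up to the sketch above, under the same assumptions):

*Main> map (snd . decodeFloat) [11, 110, 1100, 11000 :: Double]
[-49,-46,-42,-39]

Going from 11 to 110 crosses three binades, so the ULP only grows by a factor of 8 while the error N * E2 grows by 10, and the relative error gains ground, crossing half a ULP at N=100. Going from 110 to 1100 crosses four binades (1100 sits just above 1024), the ULP grows by 16, and the error falls back below half a ULP.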

Pascal Cuoq