
Why are floating point numbers printed accurately while the results of arithmetic operations are not? I mean, doesn't the compiler round to the nearest representable number when storing a double literal, not only when storing the result of an arithmetic operation?

double a = 0.1;
double b = 0.2;
System.out.println(a);
System.out.println(b);
System.out.println(a + b);

outputs:

0.1
0.2
0.30000000000000004

What I expected:

0.10000000000000001
0.20000000000000003
0.30000000000000004
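
For reference, you can inspect the exact values that are actually stored by passing each double to new BigDecimal(double), which preserves the double's exact binary value (a minimal sketch; outputs shown as comments):

import java.math.BigDecimal;

double a = 0.1;
double b = 0.2;
System.out.println(new BigDecimal(a));
// 0.1000000000000000055511151231257827021181583404541015625
System.out.println(new BigDecimal(b));
// 0.200000000000000011102230246251565404236316680908203125
System.out.println(new BigDecimal(a + b));
// 0.3000000000000000444089209850062616169452667236328125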

EDIT: What is the difference between these operations:

double a = 0.3;
double b = 0.1 + 0.2;
System.out.println(a);   //0.3
System.out.println(b);   //0.30000000000000004
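
Checking directly (a quick sketch; outputs shown as comments) confirms that a and b hold two different doubles, exactly one ULP apart:

System.out.println(a == b);  // false: the bit patterns differ
System.out.println(b - a);   // 5.551115123125783E-17, one ULP of 0.3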
hellzone
  • 1. Please read *[What Every Computer Scientist Should Know About Floating-Point Arithmetic](https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html)*. 2. Floating point may be *precise* but it is not *accurate*. – Richard Jul 13 '18 at 07:12
  • @Richard I know 0.3 can't be represented exactly as a binary number, but I don't get how it prints 0.3 when I define a double with 0.3 as its value. How does it store this number? – hellzone Jul 13 '18 at 07:17
  • What is the reasoning behind your expected output? – Yunnosch Jul 13 '18 at 07:19
  • Are you really asking: why does Java sometimes round floating point values and at other times does not? – Richard Jul 13 '18 at 07:35
  • @Richard yes. When I print 0.1 I expected to see a number like 0.10000002 as output. I mean it can't represent 0.1 exactly as a binary number, but it still prints 0.1 correctly. – hellzone Jul 13 '18 at 07:38
  • Contrary to what is stated in the accepted answer, it's not because the bit pattern is *different*. `println` finds the *shortest possible* decimal that, when converted back into binary, would give *the same binary representation*. That is how they can round `1.00000000000000005551115123126E-1` to `0.1` on output. Due to the inaccuracies involved in converting 0.1 and 0.2 to binary, and then adding them up, the resulting bit pattern is **closer** to decimal `0.30000000000000004` than it is to `0.3`, and thus the output routines cannot round it to `0.3`, because that would be a rounding error. – DevSolar Jan 08 '22 at 22:57
  • (ctd.) So yes, when `println` shows you `0.1`, it is telling you a "white lie": That is not the *closest* decimal approximation to the binary value, but the *shortest* (which is usually what you expect to see). After a couple of operations, inaccuracies can add up to the point where this "rounding trick" stops working as expected. In any case a good implementation will ensure that the decimal-binary-decimal round-trip of values does not introduce errors. (That was not always the case, this is a surprisingly young art, the seminal paper being published as recently as 1990.) – DevSolar Jan 08 '22 at 23:05
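
A short sketch of that round-trip behaviour, using nothing but Double.parseDouble and == (outputs shown as comments):

// Both strings parse to the same double, so println may print the shorter one:
System.out.println(0.1 == Double.parseDouble("0.1"));                 // true
System.out.println(0.1 == Double.parseDouble("0.10000000000000001")); // true
// 0.1 + 0.2 produces a different double than the literal 0.3, so the output
// routines are not allowed to print it as 0.3:
System.out.println(0.1 + 0.2 == 0.3);                                 // false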

1 Answer


When you convert the literals to their binary representation (for example with an online IEEE 754 converter), you will see that each literal is rounded to the closest representable double. However, that stored value still differs slightly from the decimal literal, which leads to:

0.1 = 1.00000000000000005551115123126E-1
0.2 = 2.00000000000000011102230246252E-1
0.3 = 2.99999999999999988897769753748E-1

As you can see, the sum of the stored values of 0.1 and 0.2 adds up to 0.30000000000000004, which does not produce the same bit pattern as the literal 0.3. Therefore the computed value is not equal to the literal 0.3, and cannot be printed as 0.3.
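
To make that concrete, you can dump the raw bit patterns with Double.doubleToLongBits (a minimal sketch; note the two doubles differ only in the last bit):

System.out.println(Long.toHexString(Double.doubleToLongBits(0.3)));       // 3fd3333333333333
System.out.println(Long.toHexString(Double.doubleToLongBits(0.1 + 0.2))); // 3fd3333333333334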

L.Spillner