
Possible Duplicate:
java for-loop problem

Why is the output of the following code:

for (float j2 = 0.0f; j2 < 10.0f; j2+=0.1f) {
    System.out.println(j2);
}

this:

0.0
0.1
0.2
0.3
0.4
0.5
0.6
0.70000005
0.8000001
0.9000001
1.0000001
1.1000001
1.2000002
1.3000002
1.4000002
1.5000002
1.6000003
1.7000003
1.8000003
1.9000003
2.0000002
2.1000001
2.2
2.3
2.3999999
2.4999998
2.5999997
2.6999996
2.7999995
2.8999994
2.9999993
3.0999992
3.199999
3.299999
3.399999
3.4999988
3.5999987
3.6999986
3.7999985
3.8999984
3.9999983
4.0999985
4.1999984
4.2999983
4.399998
4.499998
4.599998
4.699998
4.799998
4.8999977
4.9999976
5.0999975
5.1999974
5.2999973
5.399997
5.499997
5.599997
5.699997
5.799997
5.8999968
5.9999967
6.0999966
6.1999965
6.2999964
6.3999963
6.499996
6.599996
6.699996
6.799996
6.899996
6.9999957
7.0999956
7.1999955
7.2999954
7.3999953
7.499995
7.599995
7.699995
7.799995
7.899995
7.9999948
8.099995
8.199995
8.299995
8.399996
8.499996
8.599997
8.699997
8.799997
8.899998
8.999998
9.099998
9.199999
9.299999
9.4
9.5
9.6
9.700001
9.800001
9.900002

One more question: even if I change the loop condition to j2 <= 10.0f, the output is the same. Why? Shouldn't 10.0 be included in the output?

Harry Joy

3 Answers


As others have mentioned, this is about the precision of floating-point numbers and the fact that each arithmetic step can add to the accumulated error.

This is why you never use floats to represent money. Check out BigDecimal for exact decimal handling.
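A quick illustration of the money point (a sketch; the class name is hypothetical): summing 0.1 ten times with double drifts away from 1.0, while a BigDecimal built from a String stays exact.

```java
import java.math.BigDecimal;

public class MoneyDemo {
    public static void main(String[] args) {
        // ten additions of 0.1 as double accumulate a visible error
        double d = 0.0;
        for (int i = 0; i < 10; i++) {
            d += 0.1;
        }
        System.out.println(d); // 0.9999999999999999

        // BigDecimal built from the String "0.1" represents the value exactly
        BigDecimal b = BigDecimal.ZERO;
        for (int i = 0; i < 10; i++) {
            b = b.add(new BigDecimal("0.1"));
        }
        System.out.println(b); // 1.0
    }
}
```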

The reason 10.0 is not included is that the accumulated sum has drifted: by the end of the loop the rounding error is positive, so the value is no longer <= 10.0. 9.900002 + 0.1 will probably yield something like 10.000002, which is > 10.0, so the loop exits.
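This is easy to check (a sketch; the class name is just for illustration): after 100 additions of 0.1f the running sum is already greater than 10.0f, so even the condition j2 <= 10.0f fails before 10.0 would be printed.

```java
public class FloatDrift {
    public static void main(String[] args) {
        float j2 = 0.0f;
        for (int i = 0; i < 100; i++) {
            j2 += 0.1f; // 100 steps of "0.1", each carrying a tiny rounding error
        }
        System.out.println(j2);          // slightly above 10.0
        System.out.println(j2 <= 10.0f); // false: the loop exits without printing 10.0
    }
}
```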

Floats are not stored as exact decimal numbers but as a sign, coefficient, and exponent, i.e. (−1)^s × c × b^q, and adding two floats combines the representation error of both operands. That is how 0.7 becomes 0.70000005: neither 0.1 nor the running sum can be represented exactly in binary.

Check out http://en.wikipedia.org/wiki/IEEE_754-2008

Sebastian Olsson

Both double and float suffer from rounding errors in calculations and from representation errors (for values they cannot represent exactly). float has much more error than double, which is a good reason to avoid it; the error can be about 10^8 times greater.

When you print a floating-point number, a small amount of rounding is performed to hide this error from you, but the error can easily grow large enough to be visible anyway.

For this reason it is usually a good idea to round the result yourself by defining the accuracy you want.

for (float j2 = 0.0f; j2 < 10.05f; j2+=0.1f)
    System.out.printf("%.1f ", j2);

0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2.0 2.1 2.2 2.3 2.4 2.5 2.6 2.7 2.8 2.9 3.0 3.1 3.2 3.3 3.4 3.5 3.6 3.7 3.8 3.9 4.0 4.1 4.2 4.3 4.4 4.5 4.6 4.7 4.8 4.9 5.0 5.1 5.2 5.3 5.4 5.5 5.6 5.7 5.8 5.9 6.0 6.1 6.2 6.3 6.4 6.5 6.6 6.7 6.8 6.9 7.0 7.1 7.2 7.3 7.4 7.5 7.6 7.7 7.8 7.9 8.0 8.1 8.2 8.3 8.4 8.5 8.6 8.7 8.8 8.9 9.0 9.1 9.2 9.3 9.4 9.5 9.6 9.7 9.8 9.9 10.0

You can see the exact representation of 0.1f using BigDecimal. Interestingly, 0.1f and 0.1 are not the same value.

System.out.println("0.1f = " + new BigDecimal(0.1f));
System.out.println("0.1 = " + new BigDecimal(0.1));
System.out.println("0.1f - 0.1 = " + (0.1f - 0.1) + " or " + new BigDecimal(0.1f - 0.1));

prints

0.1f = 0.100000001490116119384765625
0.1 = 0.1000000000000000055511151231257827021181583404541015625
0.1f - 0.1 = 1.4901161138336505E-9 or 1.4901161138336505018742172978818416595458984375E-9

As has been noted, you cannot use floating point alone for money (or many other purposes) without sensible rounding. If performance and code readability are not an issue, use BigDecimal instead.
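For comparison, the original loop rewritten with BigDecimal (a sketch; the class name is hypothetical) terminates exactly at 10.0, because each 0.1 step is represented exactly:

```java
import java.math.BigDecimal;

public class ExactLoop {
    public static void main(String[] args) {
        // the String constructor gives exactly 0.1, unlike new BigDecimal(0.1)
        BigDecimal step = new BigDecimal("0.1");
        for (BigDecimal j = BigDecimal.ZERO; j.compareTo(BigDecimal.TEN) <= 0; j = j.add(step)) {
            System.out.println(j); // 0, 0.1, 0.2, ... 9.9, 10.0 -- 10.0 is included
        }
    }
}
```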

Peter Lawrey

From the Floating-Point Guide:

Why don’t my numbers, like 0.1 + 0.2 add up to a nice round 0.3, and instead I get a weird result like 0.30000000000000004?

Because internally, computers use a format (binary floating-point) that cannot accurately represent a number like 0.1, 0.2 or 0.3 at all.

When the code is compiled or interpreted, your “0.1” is already rounded to the nearest number in that format, which results in a small rounding error even before the calculation happens.
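The same effect from the quoted example is directly visible in Java with double (a minimal check; the class name is just for illustration):

```java
public class RoundingDemo {
    public static void main(String[] args) {
        // 0.1 and 0.2 are each rounded to the nearest binary double before the addition
        System.out.println(0.1 + 0.2);        // 0.30000000000000004
        System.out.println(0.1 + 0.2 == 0.3); // false
    }
}
```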

Michael Borgwardt