    double d = 0.0;
    for (int i = 0; i < 10; i++)
    {
        d = d + 0.1;
    }
    System.out.println(d);

This is an example I read somewhere about the "Principle of Least Surprise".

I was just curious why the code returns 0.999999999, and why, if I change the datatype of d to float, I get 1.0000001. What is the reason behind this behavior?

Wizard
  • Completely unrelated, but burn `d = d + 0.1` with fire. There's an operator for that. – Mike G Mar 21 '13 at 20:32
  • Are the results the same on different machines? It's interesting anyway. – Sudhanshu Umalkar Mar 21 '13 at 20:32
  • An old chestnut. People howled when they typed "*? 2 + 2 [Enter]*" on the **Atari 800** and got 3.99999. Yes, it did its own floating point math in the Basic interpreter. Children also look at me cross-eyed when I write "*i = i + 1*" on a board. "**How can i be equal to i + 1?!**" they shout. So surprising. So unintuitive - that's programming! –  Aug 04 '15 at 14:10

1 Answer


This is a classic case of floating point imprecision. Since 0.1 can't be represented exactly in binary (its binary expansion repeats forever), a little rounding error creeps in each time the number is added to itself. The difference in behavior when you switch d to a float just comes down to how 0.1, and each intermediate sum, gets rounded at float's 24 bits of significand instead of double's 53.
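
For illustration, here is a minimal sketch (the class name TenthDemo is made up for this example) that prints the exact value the literal 0.1 actually stores, using the BigDecimal(double) constructor, which preserves the double's binary value digit for digit:

    import java.math.BigDecimal;

    public class TenthDemo {
        public static void main(String[] args) {
            // new BigDecimal(double) exposes the exact binary value held by the double
            System.out.println(new BigDecimal(0.1));
            // prints 0.1000000000000000055511151231257827021181583404541015625
        }
    }

The stored value is slightly more than 0.1, and since every addition in the loop rounds its result again, the ten rounded sums end up landing just under 1.0.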

If you need a really accurate representation of decimal numbers, the BigDecimal class will quickly become your best friend. Because it stores your decimal digits exactly, rather than approximating them as a binary fraction, computations maintain their integrity.
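
As a rough sketch of what that looks like (the class name ExactTenths is just for illustration), the same loop written with BigDecimal sums to exactly 1.0:

    import java.math.BigDecimal;

    public class ExactTenths {
        public static void main(String[] args) {
            // the String constructor captures the decimal digits exactly
            BigDecimal tenth = new BigDecimal("0.1");
            BigDecimal d = BigDecimal.ZERO;
            for (int i = 0; i < 10; i++) {
                d = d.add(tenth); // exact decimal addition, no rounding
            }
            System.out.println(d); // prints 1.0
        }
    }

Note the String constructor here: new BigDecimal("0.1") captures the decimal exactly, whereas new BigDecimal(0.1) would inherit the double's rounding error shown above.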

Mike Clark
  • I'll add that checking out http://docs.oracle.com/javase/tutorial/java/nutsandbolts/datatypes.html (and the related link http://docs.oracle.com/javase/specs/jls/se7/html/jls-4.html#jls-4.2.3) would be helpful. – SirPentor Mar 21 '13 at 20:35