There are lots of numbers that can be expressed exactly in decimal, but not exactly in binary. That is, they have a terminating decimal representation, but no terminating binary representation.
To understand this, consider the number 1/3. It doesn't have a terminating decimal representation - we can keep writing 0.3333333333333 for a while, but sooner or later, we have to stop, and we still haven't quite written 1/3.
The same thing happens when we try to write 2.14 in binary. It's 10.001000111... and a bunch more 0s and 1s that eventually start repeating, in the same way as 0.333333 repeats in decimal.
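You can see this from Java itself: the BigDecimal(double) constructor shows the exact value a double holds, with no rounding. Here's a quick sketch (the exact digits aren't important, only that they aren't exactly 2.14):

// Shows the exact value of the double closest to 2.14: a long decimal
// that is close to, but not exactly, 2.14, because the repeating binary
// expansion had to be cut off somewhere.
System.out.println(new java.math.BigDecimal(2.14));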
Now a double is just a binary number with 53 significant figures. So it can't store exactly 2.14, but it can get very close. Now see what happens when we start incrementing it.
2.14 = 10.001000111...  (53 significant figures, 51 of them after the dot)
3.14 = 11.001000111...  (53 significant figures, 51 of them after the dot)
4.14 = 100.001000111... (53 significant figures, 50 of them after the dot)
5.14 = 101.001000111... (53 significant figures, 50 of them after the dot)
So we didn't lose any accuracy when we went from 2.14 to 3.14, because the part after the dot didn't change. Likewise when we went from 4.14 to 5.14.
But when we went from 3.14 to 4.14, we lost accuracy, because we needed one extra digit before the dot, so we lost a digit after the dot.
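Here's a small sketch of the same comparisons (the 4.140000000000001 matches the output further down):

// 2.14 -> 3.14: the digits after the dot survive unchanged, so adding 1
// lands exactly on the double you get from the literal 3.14.
System.out.println(2.14 + 1 == 3.14);   // true

// 3.14 -> 4.14: one extra digit is needed before the dot, one is lost
// after it, and the sum is no longer the double closest to 4.14.
System.out.println(3.14 + 1 == 4.14);   // false
System.out.println(3.14 + 1);           // 4.140000000000001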
Now Java has a complicated algorithm for figuring out how to display a floating point number. Basically, it picks the shortest decimal representation that's closer to the floating point number you're trying to represent than to any other floating point number. That way, if you write double d = 2.14; then you'll get a floating point number that's SO CLOSE to 2.14 that it will always show up as 2.14 when you print it out.
But as soon as you start messing with the digits after the dot, the complexity of Java's printing algorithm kicks in - and the number can end up printed differently from how you expect.
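For example (a small sketch of that behaviour):

double d = 2.14;
// "2.14" is the shortest decimal that singles out this particular double,
// so that's exactly what gets printed.
System.out.println(d);   // 2.14

double e = 3.14 + 1;
// This is NOT the double closest to 4.14, so printing "4.14" would point
// at the wrong double; Java prints enough digits to pin it down.
System.out.println(e);   // 4.140000000000001

// Because the printed form uniquely identifies the double, parsing it
// back recovers exactly the same value.
System.out.println(Double.parseDouble(Double.toString(e)) == e);   // true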
So this won't happen when you increment a double but don't change the number of digits before the dot. It can only happen when you increment a double past a power of 2, because that changes the number of digits before the dot.
To illustrate this, I ran this code.
// Print i + 0.14 + 1 whenever the order of adding 1 and 0.14 changes the result.
for (int i = 0; i < 1000000000; i++) {
    if (i + 1 + 0.14 != i + 0.14 + 1) {
        System.out.println(i + 0.14 + 1);
    }
}
and got this output.
4.140000000000001
1024.1399999999999
2048.1400000000003
4096.139999999999
1048576.1400000001
2097152.1399999997
4194304.140000001
Observe that all these discrepant values are just past a power of two.