
After 10 years I'm (re)studying Java and writing simple code to check syntax and verify simple behaviors. I was playing with number types and I don't understand why floats behave differently in Java (and also C#, as I verified later) than in Python (or other scripting languages like JavaScript or PHP). From my knowledge, floats are unreliable when precision matters, and one of the simplest examples I had in mind is that the sum:

float(0.1) + float(0.2)

contrary to what one could expect, is not float(0.3) but float(0.30000000000000004), due to "rounding issues behind the scenes". So, in my dummy Java code I wrote:

float wrongResult = 0.1f + 0.2f;
if (wrongResult != 0.3f) {
    System.out.println("float sucks");
}
else {
    System.out.println("float works");
}

double rightResult = 0.1 + 0.2;
if (rightResult != 0.3) {
    System.out.println("double sucks");
}
else {
    System.out.println("double works");
} 
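One way to see what is actually going on is to print the exact decimal expansion of the values the machine really stores. Here is a sketch (class name is just illustrative) using `java.math.BigDecimal`, whose `BigDecimal(double)` constructor preserves the exact binary value rather than a rounded string:

```java
import java.math.BigDecimal;

public class ExactValues {
    public static void main(String[] args) {
        // Exact value stored for the float literal 0.3f
        // (widening a float to double is exact, so nothing is lost here)
        System.out.println(new BigDecimal((double) 0.3f));
        // Exact value of the float sum 0.1f + 0.2f
        // -> prints the same value as above, which is why the float comparison succeeds
        System.out.println(new BigDecimal((double) (0.1f + 0.2f)));
        // Exact value of the double literal 0.3 and of the double sum 0.1 + 0.2
        // -> these two differ, which is why the double comparison fails
        System.out.println(new BigDecimal(0.3));
        System.out.println(new BigDecimal(0.1 + 0.2));
    }
}
```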

But the output is surprisingly:

float works
double sucks

Which drives me crazy, because double is a 64-bit type and float is only a 32-bit type, so I would expect the opposite result, since double should be more precise. So my huge dilemma is: why do scripting languages like Python, PHP and JavaScript behave one way, while compiled languages like Java and C# behave differently?

daveoncode
  • Possible duplicate of [Why not use Double or Float to represent currency?](https://stackoverflow.com/questions/3730019/why-not-use-double-or-float-to-represent-currency) – Scott Hunter Jul 16 '17 at 09:04
  • NOPE! My question is totally different! – daveoncode Jul 16 '17 at 09:07
  • 1
    Why should it surprise you that a less precise `0.1f+0.2f` might be exactly equal to a less precise `0.3f` ? It would indicate they've all been rounded in such a way that the same number is produced. – khelwood Jul 16 '17 at 09:11
  • so, why is the rounding behavior different in Python, PHP or JavaScript? – daveoncode Jul 16 '17 at 09:15
  • 1
    The rounding behaviour for `float` is that things are *more rounded*, and that means _less discriminating_. If you don't want that, use `double` – khelwood Jul 16 '17 at 09:16
  • I really don't understand the downvote on my question :( – daveoncode Jul 16 '17 at 09:20

1 Answer


There is no difference between scripting languages and Java/C#. However, there is a difference between float and double. Some of the confusion comes from the fact that in the scripting languages (at least in Python) the type called `float` is actually a 64-bit IEEE-754 value, i.e., a `double` in Java terms. The reason for the different behavior is then that the closest value after rounding is not the same at the two precisions, as can be seen from the following:

fl64(.1) == 0.10000000000000001
fl64(.2) == 0.20000000000000001
fl64(.3) == 0.29999999999999999
fl64(.1) + fl64(.2) == 0.30000000000000004

fl32(.1) == 0.1
fl32(.2) == 0.2
fl32(.3) == 0.30000001
fl32(.1) + fl32(.2) == 0.30000001

Thus, with the lower precision (32 bits), it just happens that 0.1 + 0.2 == 0.3. This, however, is not a general result, and for many other numbers it will not hold. Thus floats are still unreliable when exact precision matters.
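To see that the float "success" above is just luck, here is a quick counterexample with different literals, where the float sum and the float literal round to *different* nearest values (a sketch assuming standard IEEE-754 round-to-nearest-even arithmetic; the class name is illustrative):

```java
public class FloatCounterexample {
    public static void main(String[] args) {
        // Same pattern as 0.1f + 0.2f == 0.3f, but with these literals
        // the sum rounds to a different nearest float than the literal 3.3f.
        System.out.println(1.1f + 2.2f == 3.3f);  // false
        // The double comparison fails for these literals as well.
        System.out.println(1.1 + 2.2 == 3.3);     // false
    }
}
```

So neither precision guarantees that decimal-looking identities hold; whether a given comparison succeeds depends entirely on how each operand happens to round.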

JohanL