After 10 years I'm (re)studying Java and writing simple code to check syntax and verify simple behaviors. I was playing with number types, and I don't understand why floats behave differently in Java (and also in C#, as I verified later) than in Python (or other scripting languages like JavaScript or PHP). The point is that, as far as I know, floats are unreliable when precision matters, and one of the simplest examples I had in mind is that the sum

float(0.1) + float(0.2)

is, differently from what one could expect, not float(0.3) but float(0.30000000000000004), due to "rounding issues behind the scenes".
So, in my dummy Java code I wrote:
float wrongResult = 0.1f + 0.2f;
if (wrongResult != 0.3f) {
    System.out.println("float sucks");
} else {
    System.out.println("float works");
}

double rightResult = 0.1 + 0.2;
if (rightResult != 0.3) {
    System.out.println("double sucks");
} else {
    System.out.println("double works");
}
But, surprisingly, the output is:
float works
double sucks
This drives me crazy, because double is a 64-bit type and float is only a 32-bit type, so I would expect the opposite result, since double should be more precise. So my huge dilemma is: why do scripting languages like Python, PHP and JavaScript behave one way, while compiled languages like Java and C# behave differently?
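In case it is relevant, here is a little sketch of how I would inspect the exact values that actually get stored (I'm relying on BigDecimal's double constructor, which translates a double into its exact decimal representation; the float expressions are simply widened to double when passed in):

// Exact decimal expansion of the binary values behind each expression
System.out.println(new java.math.BigDecimal(0.1f + 0.2f)); // the float sum
System.out.println(new java.math.BigDecimal(0.3f));        // the float literal 0.3f
System.out.println(new java.math.BigDecimal(0.1 + 0.2));   // the double sum
System.out.println(new java.math.BigDecimal(0.3));         // the double literal 0.3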