Because floating-point math is not perfectly precise, you can end up with slight differences.
(Side note: I think you meant var x = .6 - .5; otherwise, you're comparing -0.1 with 0.1.)
JavaScript uses IEEE-754 double-precision 64-bit floating point. It's an extremely good way of approximating real numbers, but there's no way to represent every decimal fraction exactly in binary floating point.
Some discrepancies are easier to see than others. For instance:
console.log(0.1 + 0.2); // "0.30000000000000004"
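You can see the approximation directly by asking for more digits than the default formatting shows. (toFixed is standard JavaScript; the digits below are what IEEE-754 doubles typically store for these literals.)
console.log((0.1).toFixed(20)); // "0.10000000000000000555"
console.log((0.2).toFixed(20)); // "0.20000000000000001110"
console.log((0.3).toFixed(20)); // "0.29999999999999998890"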
There are some JavaScript libraries out there that do the "decimal" thing à la C#'s decimal type or Java's BigDecimal, where the number is actually stored as a series of decimal digits. But they're not a panacea; they just have a different class of problems (try to represent 1 / 3 accurately with one, for instance). "Decimal" types/libraries are fantastic for financial applications, because we're used to dealing with the style of rounding required in financial work, but they come at a cost: they tend to be slower than IEEE floating point.
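For illustration, here's roughly what using such a library looks like. (This sketch assumes decimal.js, which is just one example and isn't part of the answer above; other libraries have similar but not identical APIs.)
var Decimal = require("decimal.js");

// Decimal arithmetic on decimal digits: 0.1 + 0.2 really is 0.3...
console.log(new Decimal("0.1").plus("0.2").toString()); // "0.3"

// ...but 1 / 3 still has to be rounded somewhere (20 significant digits by default)
console.log(new Decimal(1).dividedBy(3).toString()); // "0.33333333333333333333"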
Let's output your x and y values:
var x = .6 - .5;
console.log(x); // "0.09999999999999998"
var y = 10.2 - 10.1;
console.log(y); // "0.09999999999999964"
No great surprise that 0.09999999999999998 is != to 0.09999999999999964. :-)
You can rationalize those a bit to make the comparison work:
function roundTwoPlaces(num) {
  return Math.round(num * 100) / 100;
}
var x = roundTwoPlaces(0.6 - 0.5);
var y = roundTwoPlaces(10.2 - 10.1);
console.log(x); // "0.1"
console.log(y); // "0.1"
console.log(x === y); // "true"
Or a more generalized solution:
function round(num, places) {
  var mult = Math.pow(10, places);
  return Math.round(num * mult) / mult;
}
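Using it on the earlier values:
var x = round(0.6 - 0.5, 2);
var y = round(10.2 - 10.1, 2);
console.log(x === y); // "true"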
Note that it's still possible for accuracy crud to be in the resulting number, but at least two numbers that are very, very, very close to each other, if run through round
with the same number of places, should end up being the same number (even if that number isn't perfectly accurate).
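To see that residual inaccuracy concretely, reusing round from above (the long form of the result is the usual double-precision value):
var z = round(0.1 + 0.2, 2);
console.log(z);             // "0.3"
console.log(z.toFixed(20)); // "0.29999999999999998890" -- very close to 0.3, but not exactly 0.3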