
Possible Duplicate:
Is JavaScript's Math broken?

Suppose,

var x = .6 - .5;

var y = 10.2 - 10.1;

var z = .2  -  .1;

Comparison results:

x == y;     // false

x == .1;    // false

y == .1;    // false

but

z == .1;    // true

Why does JavaScript behave this way?

The System Restart
    Your first two comparisons are a bit strange. Why would you expect them to match? (-0.1 vs 0.1) – Mat May 12 '12 at 09:18
  • Related to: http://stackoverflow.com/questions/1458633/elegant-workaround-for-javascript-floating-point-number-problem – BasTaller May 12 '12 at 09:23

1 Answer


Because floating point is not perfectly precise. You can end up with slight differences.

(Side note: I think you meant var x = .6 - .5; Otherwise, you're comparing -0.1 with 0.1.)

JavaScript uses IEEE-754 double-precision (64-bit) floating point (ref). It's an extremely good approximation scheme, but binary floating point cannot represent every decimal fraction exactly.
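You can see this directly with `toPrecision`, which reveals digits that the default `toString` rounding hides (a quick illustration, not from the original answer):

```javascript
// The literal 0.1 can't be represented exactly in binary; the
// nearest 64-bit double is slightly larger. Asking for 20
// significant digits exposes the extra digits.
console.log((0.1).toPrecision(20)); // "0.10000000000000000555"

// Which is why naive equality checks on arithmetic results fail:
console.log(0.1 + 0.2 === 0.3); // false
```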

Some discrepancies are easier to see than others. For instance:

console.log(0.1 + 0.2); // "0.30000000000000004"

There are JavaScript libraries out there that implement a "decimal" type à la C#'s decimal or Java's BigDecimal, where the number is actually stored as a series of decimal digits. They're not a panacea, though; they just have a different class of problems (try representing 1 / 3 accurately with one, for instance). "Decimal" types/libraries are fantastic for financial applications, because we're used to the style of rounding required in financial work, but they come at a cost: they tend to be slower than IEEE-754 floating point.

Let's output your x and y values:

var x = .6 - .5;
console.log(x); // "0.09999999999999998"

var y = 10.2 - 10.1;
console.log(y); // "0.09999999999999964"

No great surprise that 0.09999999999999998 is not equal to 0.09999999999999964. :-)

You can round those values to make the comparison work:

function roundTwoPlaces(num) {
  return Math.round(num * 100) / 100;
}

var x = roundTwoPlaces(0.6 - 0.5);

var y = roundTwoPlaces(10.2 - 10.1);

console.log(x);       // "0.1"
console.log(y);       // "0.1"
console.log(x === y); // "true"

Or a more generalized solution:

function round(num, places) {
  var mult = Math.pow(10, places);
  return Math.round(num * mult) / mult;
}


Note that the result can still carry representation error, but at least two numbers that are very, very close to each other, if run through round with the same number of places, will come out as the same number (even if that number isn't perfectly accurate).

T.J. Crowder
  • Don't ever count on JavaScript's float number precision... =)) – benqus May 12 '12 at 09:19
  • @benqus: Enh, it's good enough for most purposes. And remember it's not just JavaScript; a lot of systems use IEEE-754, which is freaky smart bit fiddling. – T.J. Crowder May 12 '12 at 09:23
  • That last trick you do there is interesting mate! I mean that's crazy no matter what standard JS uses. =) I'm gonna save this solution if you don't mind. Does this work with smaller numbers, like: 0.00245? – benqus May 12 '12 at 09:55
  • 1
    @benqus: Yes, I've added a more generalized solution. There's no guarantee, of course, that dividing by 100 (or whatever) won't result in a similarly-awkward bit of crud in the result. But at least if you feed two numbers in that are very, very close to one another, they should (in theory) both come out with the same value (even if that value isn't perfectly accurate). – T.J. Crowder May 12 '12 at 09:59