8

Possible Duplicate:
Is JavaScript's Math broken?
Why can't decimal numbers be represented exactly in binary?

What will be the result of the following code?

if(0.3 == ( 0.1 + 0.1 + 0.1 ))
{
      alert(true);
}
else
{
      alert(false);
}

It is strange, but the result will be false.

The reason is that the result of

0.1+0.1+0.1

will be

0.30000000000000004

How can this behavior be explained?

Community
  • 1
  • 1
Anton
  • 9,682
  • 11
  • 38
  • 68
  • 2
    See http://stackoverflow.com/questions/1089018/why-cant-decimal-numbers-be-represented-exactly-in-binary – mtrw Sep 30 '11 at 09:08
  • 1
    And, more specifically in JavaScript, http://stackoverflow.com/questions/4088590/0-43-in-javascript-not-1-2-its-1-20000000002-what-happening – Paul D. Waite Sep 30 '11 at 09:15
  • 2
    Perhaps you should follow the advice in the FAQ and search before asking a question that's already been asked a gazillion times before. – paxdiablo Sep 30 '11 at 09:18
  • Sorry, I really tried to find it before asking, but used other keywords for the search. – Anton Oct 01 '11 at 15:00

4 Answers

2

It's the same reason 1/3 + 1/3 + 1/3 may not give you exactly 1 in decimal. If you use 1/3 as .33333333, then 1/3 + 1/3 + 1/3 will give you .99999999, which isn't exactly one.

Unless you know exactly what you're doing, don't compare non-integer numeric types for equality.
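The same truncation can be shown with actual numbers (a sketch; `0.33333333` is an eight-digit stand-in for 1/3, as in the answer):

```javascript
// Eight decimal digits of 1/3 -- already an approximation before we even add
var third = 0.33333333;
var sum = third + third + third;

console.log(sum === 1);  // false: the truncated thirds never add back up to 1
```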

David Schwartz
  • 179,497
  • 17
  • 214
  • 278
1

The explanation is pretty simple - read about Floating Point Numbers Problems

Paul D. Waite
  • 96,640
  • 56
  • 199
  • 270
Tudor Constantin
  • 26,330
  • 7
  • 49
  • 72
1

It's due to the way floats are stored in computers. Most decimal fractions have no exact binary floating-point representation. What you typically do when you compare floats is check whether the difference is smaller than some small number epsilon, like this:

function equals(f1, f2) {
  var epsilon = 0.00001; // arbitrary tolerance
  return Math.abs(f1 - f2) < epsilon;
}

so in your case, change if (0.3 == (0.1 + 0.1 + 0.1)) to if (equals(0.3, 0.1 + 0.1 + 0.1))

Reason
  • 1,410
  • 12
  • 33
1

What you're experiencing is a basic floating point rounding error.

We can't precisely represent 0.1 without some error due to the nature of binary numbers. WolframAlpha reports decimal 0.1 to equal binary ~0.00011001100110011... Notice how it can't be finitely represented in the binary number system? This means we have to decide on a cut off point at which to stop calculating this number otherwise we'd be here forever.

This introduces an error. The error accumulates as the code adds the numbers together, leaving an incredibly small extra quantity in the sum. It ensures that the sum will never be EXACTLY 0.3, which is what the IF test is looking for.
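You can see the stored approximation directly; toFixed prints more digits than the default formatting hides (a quick sketch):

```javascript
// Ask for 20 decimal places to expose the rounding error hidden by default printing
console.log((0.1).toFixed(20));              // not exactly 0.1
console.log((0.1 + 0.1 + 0.1).toFixed(20));  // the accumulated error
```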

Some decimal numbers, however, can be represented accurately in binary such as dec 0.5 = bin 0.1 and dec 0.25 = bin 0.01.

We can demonstrate this similarly to your original code by using 0.5 == (0.25 + 0.25).
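That demonstration might look like this (a sketch of the 0.25 + 0.25 example, using console.log in place of alert):

```javascript
// 0.25 and 0.5 are powers of two, so both are exact in binary floating point
if (0.5 == (0.25 + 0.25)) {
  console.log(true);   // this branch runs: the sum is exactly 0.5
} else {
  console.log(false);
}
```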


For further reading on this I recommend The Floating-Point Guide.

It provides a good overview of the concept of floating point numbers and how errors in calculation can arise. There is also a section on JavaScript which demonstrates how to overcome the rounding errors you're experiencing.

Lewis Norton
  • 6,911
  • 1
  • 19
  • 29