
So I have read this answer here: Is floating point math broken?

It says that, because every number in JS is a double-precision float, 0.1 + 0.2, for example, will NOT equal 0.3.

But I don't understand why this never happens with integers. Why does 1 + 2 always equal 3, etc.? It would seem that integers like 1 or 2, similarly to 0.1 and 0.2, don't have a perfect representation in binary64, so their math should also sometimes break, but that never happens.
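For reference, this is the behavior I mean (runnable in any JS console):

console.log(0.1 + 0.2);          // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);  // false
console.log(1 + 2);              // 3
console.log(1 + 2 === 3);        // true, apparently always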

Why is that?

user3784950

2 Answers


but I don't understand why it never happens with integers?

It does; the integer just has to be really big before it hits the limits of the IEEE-754 format:

var a = 9007199254740992;
console.log(a);      // 9007199254740992
var b = a + 1;
console.log(b);      // still 9007199254740992
console.log(a == b); // true
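Since ES2015 that threshold is exposed directly on Number, so you don't have to hard-code it. A quick check, using only standard built-ins:

// 2^53 - 1 is the largest integer n such that n and n + 1
// are both exactly representable in binary64.
console.log(Number.MAX_SAFE_INTEGER);                // 9007199254740991
console.log(Number.isSafeInteger(9007199254740991)); // true
console.log(Number.isSafeInteger(9007199254740992)); // false (this is 2^53)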
T.J. Crowder

Floating-point formats such as IEEE-754 essentially describe a value with the following expression:

value := sign * mantissa * 2 ^ exponent

The mantissa is an integer of various sizes. For four-byte floating point (binary32) the mantissa is 24 bits, and for eight-byte floating point (binary64) it is 53 bits (52 stored explicitly plus one implicit leading bit). If the exponent is 0, the value of the expression is determined only by the sign and the mantissa, which means every integer with a magnitude up to 2^53 is represented exactly. This is, in fact, how integers are represented by JavaScript.
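To make that concrete, here is a small sketch (decompose is my own helper, not any standard API) that pulls the sign, exponent, and mantissa fields out of a binary64 value using a DataView:

// Decode the IEEE-754 binary64 fields of a JS number.
// For a normal number the value is (-1)^sign * (1 + mantissa/2^52) * 2^exponent.
function decompose(x) {
  const view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x);                 // big-endian by default
  const hi = view.getUint32(0);          // most significant 32 bits
  const lo = view.getUint32(4);          // least significant 32 bits
  return {
    sign: hi >>> 31,
    exponent: ((hi >>> 20) & 0x7ff) - 1023,               // remove the 1023 bias
    mantissa: (BigInt(hi & 0xfffff) << 32n) | BigInt(lo)  // the 52 stored bits
  };
}

console.log(decompose(3));
// { sign: 0, exponent: 1, mantissa: 2251799813685248n }  i.e. 1.5 * 2^1
console.log(decompose(0.1));
// exponent: -4, mantissa: 0x999999999999a in hex -- an infinitely
// repeating pattern that had to be rounded off at 52 bits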

What seems to take most people by surprise is the base-2 exponent instead of base 10. We accept that, in base 10, the result of 1/3 or 2/3 cannot be exactly represented without an infinite number of digits or the acceptance of round-off error. Similarly, there are fractions in base 2 with the same problem. Unfortunately for our base-10 mindset, these include the negative powers of 10 we use every day, such as 0.1 and 0.01.
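You can see those repeating binary digits directly, since Number's toString accepts a radix:

console.log((0.5).toString(2)); // "0.1" -- a power of 2, exactly representable
console.log((0.1).toString(2)); // "0.000110011001100110011..." -- repeats forever, so it is rounded
console.log(0.1 + 0.2);         // 0.30000000000000004 -- where that rounding shows up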

Jon Trauntvein