
While practicing code golf on a Discord server, we found that our code would stop working once the number got too big, and the results would start getting messy.

The main calculation is a = a*10 + !(a%2), and the first a in the series that produced unexpected results was a = 1010101010101010 (expected next value: 10101010101010101; actual: 10101010101010100).
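For context, here is a minimal sketch reconstructing the sequence; only the update `a = a*10 + !(a%2)` comes from the question, the loop bounds are my own:

```javascript
// Sketch of the sequence from the question: start at 1 and repeatedly
// append a digit that alternates 0/1 (!(a % 2) coerces to 1 when a is
// even, and to 0 when a is odd).
let a = 1;
for (let i = 0; i < 15; i++) {
  a = a * 10 + !(a % 2);
}
console.log(a); // 1010101010101010 (still exact: below 2^53)

a = a * 10 + !(a % 2);
console.log(a); // 10101010101010100, not the expected 10101010101010101
```

The last step crosses the 53-bit integer range of a double, so the trailing `+ 1` is rounded away.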

While investigating, I found that 10101010101010100 + 1 evaluates to 10101010101010100.
After seeing that, I tried 10101010101010100 + 2, which gives 10101010101010102 and left me confused. Why would adding 1 leave the value unchanged while adding 2 does not?

More examples:

10101010101010100 + 3 = 10101010101010104
10101010101010102 + 1 = 10101010101010104
10101010101010104 + 1 = 10101010101010104
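The results above can be checked with strict comparisons; note that a 17-digit literal is itself rounded to the nearest representable double before any arithmetic happens:

```javascript
// The literal 10101010101010101 cannot be represented exactly; it is
// rounded (to nearest, ties to even) to 10101010101010100 at parse time.
console.log(10101010101010101 === 10101010101010100); // true
// Adding 1 produces a value that is rounded right back down:
console.log(10101010101010100 + 1); // 10101010101010100
// Adding 2 lands on a representable even integer:
console.log(10101010101010100 + 2); // 10101010101010102
```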

I also tried wrapping the numbers in Number() to rule out any automatic type conversion (it's JS, after all), but the results were the same.

Shortly afterwards, What is JavaScript's highest integer value that a number can go to without losing precision? was posted in the same Discord, and it seems to explain why the results were what they were. But I am curious what happens under the surface of the console to produce these rather unexpected results. Or is the behaviour of arithmetic operations on numbers this big simply left undefined in JS?
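For reference, the threshold the linked question talks about can be checked directly (Number.MAX_SAFE_INTEGER is a standard property; the rest is my own illustration):

```javascript
// The largest integer n such that n and n + 1 are both exactly representable.
console.log(Number.MAX_SAFE_INTEGER);                 // 9007199254740991
console.log(Number.MAX_SAFE_INTEGER === 2 ** 53 - 1); // true
// Just above it, consecutive integers start to collapse:
console.log(2 ** 53 + 1 === 2 ** 53); // true: the + 1 is rounded away
```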

geisterfurz007
  • First of all, if you want to work with binary numbers you probably should not represent them as decimal numbers – Bergi Feb 01 '18 at 20:22
  • [Number.MAX_SAFE_INTEGER](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/MAX_SAFE_INTEGER) – James Feb 01 '18 at 20:22
  • All JS numbers are technically processed using floating point arithmetic. I can't tell you exactly why your behavior occurs, but it's probably because floating point accuracy fails at very small and very large values due to the design of the spec. https://en.wikipedia.org/wiki/Floating-point_arithmetic#Range_of_floating-point_numbers – Shadetheartist Feb 01 '18 at 20:24
  • 4
    Arithmetic operations are well defined as using 64 bit floating point numbers in JS. And it seems that you already found the culprit: the integer you are interested in is in a range where not for every consecutive integer there is a valid `double` bit pattern representing it – Bergi Feb 01 '18 at 20:24
  • @James Thanks for the link, but that page basically only says that integers above Number.MAX_SAFE_INTEGER are not safe to use, not what causes the issue in the implementation. – geisterfurz007 Feb 01 '18 at 20:25
  • There isn't really a way to answer this without explaining in detail how computers implement floating point numbers, and there are more than enough resources on the Internet already doing just that. Wikipedia is a good place to start. – JJJ Feb 01 '18 at 20:28
  • @Bergi So there is a valid double bit pattern to represent 10101010101010102 for example but not 10101010101010101? Meaning JS implementation is consistent and it just cannot do better than that? And each of the sums in the block in the question results in the same valid bit pattern? – geisterfurz007 Feb 01 '18 at 20:28
  • @geisterfurz007 Yes, it's just a consequence of the implementation. You can use a library like https://github.com/MikeMcl/decimal.js/ to work with numbers at higher precision. – Shadetheartist Feb 01 '18 at 20:32
  • Even adding 0 will not produce the same number as some of those, since those numbers are already different from what you type. See https://stackoverflow.com/questions/35727608/why-does-number-return-wrong-values-with-very-large-integers – trincot Feb 01 '18 at 20:33
  • It's like scientific notation (1000 = 1.000 x 10^3). If I can afford three decimal places to store the mantissa, I can tell the difference between 1000 (1.000 x 10^3) and 1001 (1.001 x 10^3); however, if I only have two decimal places, both become (1.00 x 10^3) and they will seem equal. – James Feb 01 '18 at 20:41
  • @geisterfurz007 Yes, exactly that. Every value in your calculation is [rounded](http://floating-point-gui.de/errors/rounding/) to the nearest representable double. – Bergi Feb 02 '18 at 00:29

1 Answer


See this documentation.

It mentions that the fractional part (mantissa) of a JavaScript Number is stored in 52 bits. As your number approaches the point where those bits are no longer enough to hold every integer, fewer and fewer numbers in a given range can be represented exactly.

For numbers between Math.pow(2, 52) and Math.pow(2, 53), every whole number can be represented, but nothing finer; i.e.:

Math.pow(2, 52) === Math.pow(2, 52) + 0.5

For numbers between Math.pow(2, 53) and Math.pow(2, 54) (which is where 10101010101010100 lies), only every second whole number can be represented; between Math.pow(2, 54) and Math.pow(2, 55) only every fourth, and so on as the exponent grows.

console.log('2^52 === 2^52 + 0.5 :', 
  Math.pow(2, 52) === Math.pow(2, 52) + 0.5);
  
console.log('2^53 === 2^53 + 1   :', 
  Math.pow(2, 53) === Math.pow(2, 53) + 1);
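To make the doubling gap concrete, here is a small probe (my own helper, not part of the original answer) that finds the smallest power of two that still changes a value when added to it:

```javascript
// Smallest power of 2 that, when added to x, yields a different double.
// For large integers this matches the gap between adjacent representable
// integers, which doubles with each power of two above 2^52.
function spacing(x) {
  let step = 1;
  while (x + step === x) step *= 2;
  return step;
}

console.log(spacing(2 ** 52));           // 1
console.log(spacing(2 ** 53));           // 2
console.log(spacing(10101010101010100)); // 2
console.log(spacing(2 ** 54));           // 4
```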
Xeraqu
  • This makes a lot more sense to me now! This answer and the last comment by James ("[...] If I can afford three decimal places [...]") made me understand this. I actually didn't even know that the precision gets worse and worse the closer you get to Number.MAX_SAFE_INTEGER... Thanks for your time and effort! – geisterfurz007 Feb 01 '18 at 20:50