Numbers in JavaScript are IEEE-754 double-precision binary floating point, a fairly compact format (64 bits) which provides for fast calculations and a vast range. It does this by storing the number as a sign bit, an 11-bit exponent, and a 52-bit significand (although through cleverness it actually gets 53 bits of precision). It's binary (base 2) floating point: The significand (plus some cleverness) gives us the value, and the exponent gives us the magnitude of the number.
Naturally, with only so many significant bits, not every number can be stored. Here is the number 1, the next highest number after 1 that the format can store, 1 + 2⁻⁵² ≈ 1.00000000000000022, and the next highest after that, 1 + 2 × 2⁻⁵² ≈ 1.00000000000000044:
+--------------------------------------------------------------- sign bit
/ +-------+------------------------------------------------------ exponent
/ / | +-------------------------------------------------+- significand
/ / | / |
0 01111111111 0000000000000000000000000000000000000000000000000000
= 1
0 01111111111 0000000000000000000000000000000000000000000000000001
≈ 1.00000000000000022
0 01111111111 0000000000000000000000000000000000000000000000000010
≈ 1.00000000000000044
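We can see that gap directly in the console: `Number.EPSILON` is defined as the difference between 1 and the next representable number, which is exactly 2⁻⁵²:

```javascript
// Number.EPSILON is 2**-52, the gap between 1 and the next representable double
console.log(Number.EPSILON === 2 ** -52); // true
console.log(1 + Number.EPSILON);          // 1.0000000000000002
console.log(1 + 2 * Number.EPSILON);      // 1.0000000000000004
```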
Note the jump from 1.00000000000000022 to 1.00000000000000044; there's no way to store 1.0000000000000003. That can happen with integers, too: Number.MAX_SAFE_INTEGER (9,007,199,254,740,991) is the highest positive integer value i that the format can hold where i and i + 1 are both exactly representable (spec). Both 9,007,199,254,740,991 and 9,007,199,254,740,992 can be represented, but the next integer, 9,007,199,254,740,993, cannot; the next integer we can represent after 9,007,199,254,740,992 is 9,007,199,254,740,994. Here are the bit patterns; note the rightmost (least significant) bit:
+--------------------------------------------------------------- sign bit
/ +-------+------------------------------------------------------ exponent
/ / | +-------------------------------------------------+- significand
/ / | / |
0 10000110011 1111111111111111111111111111111111111111111111111111
= 9007199254740991 (Number.MAX_SAFE_INTEGER)
0 10000110100 0000000000000000000000000000000000000000000000000000
= 9007199254740992 (Number.MAX_SAFE_INTEGER + 1)
x xxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
9007199254740993 (Number.MAX_SAFE_INTEGER + 2) can't be stored
0 10000110100 0000000000000000000000000000000000000000000000000001
= 9007199254740994 (Number.MAX_SAFE_INTEGER + 3)
Remember, the format is base 2, and with that exponent the least significant bit is no longer fractional; it has a value of 2. It can be off (9,007,199,254,740,992) or on (9,007,199,254,740,994), so at this point we've started to lose precision even at the whole-number (integer) scale. Which has implications for our loop!