33

Can anyone explain to me why 9999999999999999 is converted to 10000000000000000?

alert(9999999999999999); //10000000000000000

http://jsfiddle.net/Y2Vb2/

ajax333221
Ramaraj Karuppusamy
  • That number is too large to fit in an integer, so it is possibly converted to a double. Floating-point numbers are not exact. – John Saunders Nov 17 '12 at 09:51
  • As @Kos said, all JavaScript numbers are double-precision floating-point numbers, which means you have only about 16 digits of precision, and `9999999999999999` clearly passes that limit. – Ravi Nov 17 '12 at 10:05
  • Related: The highest integer that can be used with 100% precision in JavaScript is **+/- 9007199254740992** (or `Math.pow(2,53)`): http://stackoverflow.com/questions/307179/what-is-javascripts-max-int-whats-the-highest-integer-value-a-number-can-go-t – David Hellsing Nov 17 '12 at 10:08
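
A quick sketch of that limit (using the standard Number.MAX_SAFE_INTEGER constant, which equals 2^53 - 1; expected outputs are in the comments):

console.log(Number.MAX_SAFE_INTEGER);                  // 9007199254740991 (2^53 - 1)
console.log(Math.pow(2, 53) === Math.pow(2, 53) + 1);  // true - above 2^53, adding 1 is lost to rounding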

6 Answers

42

JavaScript doesn't have integers, only 64-bit floats, and you've run out of floating-point precision.

See similar issue in Java: why is the Double.parseDouble making 9999999999999999 to 10000000000000000?
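
A quick way to see this (expected results shown in the comments, assuming any standard engine):

console.log(9999999999999999 === 10000000000000000); // true - both literals parse to the same double
console.log(9999999999999999 - 10000000000000000);   // 0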

Kos
20
  1. JavaScript only has floating point numbers, no integers.

  2. Read What Every Computer Scientist Should Know About Floating-Point Arithmetic.

    Summary: floating-point numbers have only limited precision; go beyond 15 digits or so and you'll get rounding, as the snippet below shows.
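
A rough sketch of that cutoff (15 decimal digits still survive, 16 may not; expected outputs in the comments):

console.log(999999999999999);  // 999999999999999   - 15 digits, stored exactly
console.log(9999999999999999); // 10000000000000000 - 16 digits, already rounded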

Richard
11

9999999999999999 is treated internally in JavaScript as a floating-point number. It cannot be accurately represented in IEEE 754 double precision, as it would require 54 bits of precision (log2(9999999999999999) = 53.150849512..., and since fractional bits do not exist, the result must be rounded up to 54), while IEEE 754 provides only 53 bits (1 implicit bit + 52 explicitly stored bits of the mantissa) - one bit less. Hence the number simply gets rounded.

Since only one bit is lost in this case, even 54-bit numbers are still exactly representable: the bit that gets dropped is their lowest bit, which is 0. Odd 54-bit numbers are rounded to the nearest value that happens to be an even 53-bit number doubled (i.e. a multiple of four), given the default unbiased round-to-nearest-even mode of IEEE 754.
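
A short sketch of that behaviour (the values in the comments are what a typical engine prints):

console.log(9999999999999998); // 9999999999999998  - even 54-bit number, exact
console.log(9999999999999999); // 10000000000000000 - odd, exactly halfway, rounds to the even mantissa
console.log(9007199254740993); // 9007199254740992  - 2^53 + 1 rounds down
console.log(9007199254740995); // 9007199254740996  - 2^53 + 3 rounds up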

Hristo Iliev
  • Odd numbers are not always rounded up. Usual practice is to use banker's rounding, so they get rounded to the nearest multiple of four. A quick test in the Google Chrome console confirms this. – Dietrich Epp Nov 17 '12 at 12:43
  • IEEE has several different rounding modes, but by default (which is what JS uses) it rounds halfway-cases towards the closest even binary number. In some circumstances this is a multiple of four, but not always. – Florian Loitsch May 10 '13 at 12:50
5

Why is 9999999999999999 converted to 10000000000000000?

All numbers in JavaScript are stored in the 64-bit IEEE-754 format, also known as “double precision”. So there are exactly 64 bits to store a number: 52 of them are used to store the significant digits (the mantissa), 11 of them store the position of the binary point (the exponent), and 1 bit is for the sign.
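
A minimal sketch of that layout (assuming an engine with DataView and BigInt support), splitting the stored number into its three fields:

const view = new DataView(new ArrayBuffer(8));
view.setFloat64(0, 9999999999999999);  // stored as an IEEE-754 double
const bits = view.getBigUint64(0).toString(2).padStart(64, "0");
console.log(bits.slice(0, 1));   // "0" - the sign bit
console.log(bits.slice(1, 12));  // "10000110100" - the 11 exponent bits (biased)
console.log(bits.slice(12));     // the 52 mantissa bits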

If a number is too big, it would overflow the 64-bit storage, potentially giving an infinity:

alert( 1e500 );
// Result => Infinity
// "e" multiplies the number by 1 followed by the given count of zeroes, so 1e500 is far too big for 64 bits.

If we check whether the sum of 0.1 and 0.2 is 0.3, we get false.

alert( 0.1 + 0.2 == 0.3 )

Strange! What is it then, if not 0.3? This happens because a number is stored in memory in its binary form, a sequence of ones and zeroes. But fractions like 0.1 and 0.2, which look simple in the decimal numeric system, are actually unending fractions in their binary form.

In other words, what is 0.1? It is one divided by ten, 1/10, one-tenth. In the decimal numeral system such numbers are easily representable. Compare it to one-third, 1/3: it becomes an endless fraction, 0.33333(3).

There’s just no way to store exactly 0.1 or exactly 0.2 using the binary system, just like there is no way to store one-third as a decimal fraction.

The numeric format IEEE-754 solves this by rounding to the nearest possible number. These rounding rules normally don’t allow us to see that “tiny precision loss”, so the number shows up as 0.3. But beware, the loss still exists.
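
Asking for more digits makes the hidden loss visible (a quick sketch; the exact digits in the comments are what a typical engine reports):

alert( (0.1 + 0.2).toFixed(20) ); // 0.30000000000000004441
alert( (0.3).toFixed(20) );       // 0.29999999999999998890
alert( 0.1 + 0.2 === 0.3 );       // false - the two stored values differ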

As you can see:

alert( 9999999999999999 ); // shows 10000000000000000

This suffers from the same issue: a loss of precision. There are 64 bits for the number, 52 of them can be used to store digits, but that’s not enough. So the least significant digits disappear.

What is really happening when 9999999999999999 becomes 10000000000000000:

JavaScript doesn’t trigger an error in such cases. It does its best to fit the number into the desired format, but unfortunately this format is not big enough.
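
If you want to detect this situation rather than silently accept the rounding, the standard Number.isSafeInteger check (a small sketch) reports whether an integer is still exactly representable:

alert( Number.isSafeInteger(9999999999999999) ); // false - outside the exactly representable range
alert( Number.isSafeInteger(9007199254740991) ); // true  - this is Number.MAX_SAFE_INTEGER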

Reference : https://javascript.info/number

You can also refer to this SO question; it covers JavaScript numbers in great detail.

Tapas Thakkar
3

Question: Sometimes JavaScript computations seem to yield "inaccurate" results, e.g. 0.362*100 yields 36.199999999999996. How can I avoid this?

Answer: Internally JavaScript stores all numbers in double-precision floating-point format, with a 52-bit mantissa and an 11-bit exponent (the IEEE 754 Standard for storing numeric values). This internal representation of numbers may cause unexpected results like the above. Most integers greater than 2^53 = 9007199254740992 cannot be represented exactly in this format. Likewise, many decimals/fractions, such as 0.362, cannot be represented exactly, leading to the perceived "inaccuracy" in the above example. To avoid these "inaccurate" results, you might want to round the results to the precision of the data you used.
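
A small sketch of that suggested rounding, applied to the 0.362*100 example (rounding back to 3 decimal places, since the input had 3):

const raw = 0.362 * 100;                        // 36.199999999999996
const rounded = Math.round(raw * 1000) / 1000;  // 36.2 - rounded to the precision of the data
alert( raw + " -> " + rounded );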

http://www.javascripter.net/faq/accuracy.htm

1

9999999999999999 in binary form is 100011100001101111001001101111110000001111111111111111 which has 54 digits.

Below we will convert this figure to the JavaScript (IEEE-754) format, which has 1 bit for the sign, 11 bits for the exponent stored in biased (offset) binary form, and 52 bits for the mantissa itself.

In binary form, the first digit of a normalized number is always 1, so JavaScript omits it when storing the mantissa in the IEEE-754 format (the implicit leading bit). That leaves 00011100001101111001001101111110000001111111111111111 for the mantissa, which is 53 digits; since only 52 digits can be kept, the number is rounded: the trailing 1 is dropped and the remainder rounds up to 0001110000110111100100110111111000001000000000000000.

The final number in binary form will be 1 0001110000110111100100110111111000001000000000000000 0, which in decimal form is 10000000000000000.

Here the leading 1 is the implicit bit that is not written among the 52 mantissa bits, followed by the 52 stored mantissa bits, and one trailing 0 (supplied by the exponent) to bring it back to 54 digits - which is 10000000000000000 in decimal.

That might be hard to understand unless you read this beautiful article
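
You can check the result of that bit-level walkthrough directly (a sketch; toString(2) prints the bits of the value that actually got stored):

console.log( (9999999999999999).toString(2) );
// 100011100001101111001001101111110000010000000000000000
console.log( (10000000000000000).toString(2) );
// 100011100001101111001001101111110000010000000000000000 - identical: both literals are the same double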

iLyas