
Since integers above 2^53 can't be accurately represented in doubles, how does JS decide on their decimal representation when they are printed as strings?

For example, 2^55 is 36028797018963968, and printf("%lf",(double)(1LL<<55)) in C will print that number correctly, since it has trailing zeroes in its binary representation that do not cause precision loss when truncated.

However, in JavaScript, we get 36028797018963970 instead. It seems to try to round numbers to get a 0 at the end, but not always - for instance, 2^55-4 is represented correctly with 4 at the end.
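For reference, the behavior described above can be reproduced in Node (a quick sketch; the values are the ones from the question):

```javascript
// 2^55 is exactly representable as a double (its low bits are zero),
// yet JavaScript prints it with a trailing 0 instead of the exact value.
const x = 2 ** 55;            // exactly 36028797018963968 as a double
console.log(String(x));       // "36028797018963970"

// Both decimal strings round to the same double, so they compare equal:
console.log(36028797018963970 === 36028797018963968); // true

// 2^55 - 4 has no shorter decimal that round-trips, so it prints exactly:
console.log(String(2 ** 55 - 4)); // "36028797018963964"
```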

Is there some place in the spec that defines this weird behavior?

riv
  • I don't know off the top of my head, but this thread seems like a good place to start: https://stackoverflow.com/questions/30678303/extremely-large-numbers-in-javascript – jmcgriz Jul 24 '18 at 15:34
  • I don't need a workaround, I want to know how the default behavior works. – riv Jul 24 '18 at 15:37
  • It [is specified](https://www.ecma-international.org/ecma-262/5.1/#sec-9.8.1) (not pretending this is a simple lecture). And yes, you're right: `N%10` doesn't necessarily give the same digit as `(""+N).slice(-1)`. – Denys Séguret Jul 24 '18 at 15:41
  • Ah, thanks, so it looks for the shortest (in terms of significant digits) number that maps to the same double value. Wonder why they decided to differ from existing number formatting implementations, though. – riv Jul 24 '18 at 15:55
  • I can't really follow the algorithm there, but I wonder if it's related to [Steele](https://lists.nongnu.org/archive/html/gcl-devel/2012-10/pdfkieTlklRzN.pdf) or [Burger-Dybvig](https://www.cs.indiana.edu/~dyb/pubs/FP-Printing-PLDI96.pdf) – Barmar Jul 24 '18 at 16:07
  • The important part is step 5, where it looks for the shortest number `s` that, when converted to `Number`, produces the same binary representation. So the ...70 number is shorter than ...68 because the 0 at the end can be chopped off. – riv Jul 24 '18 at 16:13

1 Answer


Question:

> Since integers above 2^53 can't be accurately represented in doubles, how does JS decide on their decimal representation when they are printed as strings?



1. How JavaScript prints decimal numbers

JavaScript numbers are internally stored in binary floating point and usually displayed in the decimal system.

There are two decimal notations used by JavaScript:

Fixed notation

[ "+" | "-" ] digit+ [ "." digit+ ]

Exponential notation

[ "+" | "-" ] digit [ "." digit+ ] "e" [ "+" | "-" ] digit+

An example of exponential notation is 1.2345678901234568e+21.

Rules for displaying decimal numbers:

A. Use exponential notation if there are more than 21 digits before the decimal point.

B. Use exponential notation if the number starts with “0.” followed by more than five zeros.
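Both thresholds can be checked directly in Node (values chosen for illustration):

```javascript
// Rule A: 22 digits before the point switches to exponential notation.
console.log(String(10 ** 20)); // "100000000000000000000" (21 digits: still fixed)
console.log(String(10 ** 21)); // "1e+21"                 (22 digits: exponential)

// Rule B: more than five zeros after "0." switches to exponential notation.
console.log(String(0.000001));  // "0.000001" (five zeros: still fixed)
console.log(String(0.0000001)); // "1e-7"     (six zeros: exponential)
```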


2. The ECMAScript 5.1 display algorithm

Sect. 9.8.1 of the ECMAScript 5.1 specification describes the algorithm for displaying a decimal number.

Given a number

mantissa × 10^(pointPos − digitCount)

The mantissa of a floating point number is an integer: the significant digits plus a sign. Leading and trailing zeros are discarded. Example: the mantissa of 12.34 is 1234.

Case-1. No decimal point (digitCount ≤ pointPos ≤ 21): print the digits (without leading zeros), followed by pointPos−digitCount zeros.

Case-2. Decimal point inside the mantissa (0 < pointPos ≤ 21, pointPos < digitCount): display the first pointPos digits of the mantissa, a point, and then the remaining digitCount−pointPos digits.

Case-3. Decimal point comes before the mantissa (−6 < pointPos ≤ 0): display a 0 followed by a point, −pointPos zeros, and the mantissa.

Case-4. Exponential notation (pointPos ≤ −6 or pointPos > 21): display the first digit of the mantissa. If there are more digits, display a point and the remaining digits. Then display the character e and a plus or minus sign (depending on the sign of pointPos−1), followed by the absolute value of pointPos−1. The result looks as follows.

mantissa0 [ "." mantissa1..digitCount ] 
     "e" signChar(pointPos−1) abs(pointPos−1)
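The four cases can be illustrated with small examples (case labels follow the list above; the mantissa/pointPos decompositions are worked out from the definition):

```javascript
// Case 1: mantissa 123, digitCount 3, pointPos 6 → digits plus 3 trailing zeros.
console.log(String(123000));      // "123000"
// Case 2: point inside the mantissa (mantissa 1234, pointPos 2).
console.log(String(12.34));       // "12.34"
// Case 3: point before the mantissa (mantissa 123, pointPos -2).
console.log(String(0.00123));     // "0.00123"
// Case 4: pointPos = -6 ≤ -6 → exponential, exponent is pointPos - 1 = -7.
console.log(String(0.000000123)); // "1.23e-7"
```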

Question:

> However, in JavaScript, we get 36028797018963970 instead. It seems to try to round numbers to get a 0 at the end, but not always - for instance, 2^55-4 is represented correctly with 4 at the end.
>
> Is there some place in the spec that defines this weird behavior?



The key part is step 5 of the Sect. 9.8.1 algorithm: the mantissa is chosen as the shortest sequence of significant digits whose Number value maps back to the same double. 36028797018963970 needs only 16 significant digits (its trailing 0 can be chopped off), while the exact value 36028797018963968 needs 17, so the shorter one is displayed.

Check: "How numbers are encoded in JavaScript", Sect. 5, "The maximum integer".


Additional Reference: https://medium.com/dailyjs/javascripts-number-type-8d59199db1b6
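On the 2^53 limit mentioned in the question, a quick check in Node:

```javascript
// Integers are exact up to 2^53; above that, odd integers are unrepresentable.
console.log(Number.MAX_SAFE_INTEGER);  // 9007199254740991 (2^53 - 1)
console.log(2 ** 53 === 2 ** 53 + 1);  // true: both round to the same double

// Near 2^55 the spacing between adjacent doubles is 8, which is why
// several decimal integers map to the same double there:
console.log(Number("36028797018963970") === Number("36028797018963968")); // true
```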

NullPointer
  • Thanks for the detailed response, but I think you missed the key part: how the mantissa is chosen. If there are multiple integers corresponding to the same binary form, it chooses the one with the most trailing zeroes, which causes the behavior I mentioned. – riv Jul 27 '18 at 07:00
  • Thanks. I just checked and my reference link was not correct. I updated it; hope everything is covered now with the reference: http://2ality.com/2012/04/number-encoding.html – NullPointer Jul 27 '18 at 10:56