
From everything I've been able to find online, JavaScript allegedly uses IEEE 754 doubles for its numbers, yet I have found numbers that work in C doubles but not in JavaScript. For example,

#include <stdio.h>

int main(){
    double x = 131621703842267136.;
    printf("%lf\n", x);
}

prints 131621703842267136.000000 (NOTE: IN AN EARLIER VERSION OF THE QUESTION I COPIED THE WRONG NUMBER FOR C), but in JavaScript

console.log(131621703842267136)

outputs 131621703842267140. From everything I've read online, both C doubles and JavaScript numbers are 64-bit floating point, so I am very confused why they would output different results. Any ideas?

Oscar Smith
  • `131621703842267136 > Number.MAX_SAFE_INTEGER` in JavaScript – VLAZ May 17 '19 at 03:45
  • 1
    while that is true, it still should be safe as it equals `6**22`, so the only non power of 2 bit is `3**22` which is much smaller, so it is storable as a double – Oscar Smith May 17 '19 at 03:46
  • 1
    For proof of this, C is able to store it in a double, so this question isn't a duplicate – Oscar Smith May 17 '19 at 03:46
  • 2
    The output `131621703737409536.000000` differs from the input `131621703842267136` on `C` in your question. Look at the last `4` digits of the integer part, for example. – Shidersz May 17 '19 at 03:48
  • 2
    afaik IEEE 754 defines both float and double, and javascript uses IEEE 754 doubles. https://en.wikipedia.org/wiki/Double-precision_floating-point_format – Oscar Smith May 17 '19 at 03:50
  • 1
    That's rather surprising output from C: only the first 9 digits are correct. Do you see the same output if you use "%f" instead of "%lf"? (The "l" isn't needed here.) – Mark Dickinson May 17 '19 at 07:00
  • Anyway, independent of the C output, this is almost certainly just due to the way that JavaScript is *displaying* the float, not to do with the actual value that's stored. That is, the actual value in JS is almost certainly still `131621703842267136.0`, and it's being rounded for display purposes. You could verify this by (for example) computing `131621703842267136 - 131621703842200000`, which should show `67136.0`. (Note that 131621703842200000 is also exactly representable in IEEE 754 binary64 format.) – Mark Dickinson May 17 '19 at 07:14
  • 1
    @VLAZ: The IEEE 754 standard specifies a variety of binary and decimal floating-point formats, including the binary64 format (informally, "double precision") that JavaScript is specified to use for all numbers, and that C implementations commonly use (but are not required to use) for the "double" type. – Mark Dickinson May 17 '19 at 07:26
  • @MarkDickinson my mistake, then. If that's the case, I'm not sure what's happening. To be honest, this is getting out of my depth but JS does consider 131621703842267136 to be [over the maximum integer it can safely process](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/MAX_SAFE_INTEGER). `131621703842267136 + 1` yields `131621703842267140` and `(131621703842267136 + 1) - 131621703842200000` gives you `67136` - the same as `131621703842267136 - 131621703842200000` – VLAZ May 17 '19 at 07:31
  • I don't think you can compare these without mentioning what implementations do you use: Which C compiler? Which Javascript runtime? – user694733 May 17 '19 at 07:40
  • @user694733: The JavaScript runtime is irrelevant since the behavior is specified by the ECMAScript specification. – Eric Postpischil May 17 '19 at 11:11
  • 2
    Which C implementation are you using? Per my answer, it appears to be violating the C standard. Would you double-check that the C source in the question produces the output shown in the question? – Eric Postpischil May 17 '19 at 11:27
  • @EricPostpischil An implementation with a 32bit `double`? https://wandbox.org/permlink/XYOaQfW8EueQXJOO – Bob__ May 17 '19 at 13:32
  • @Bob__: `double` is required to be able to preserve ten decimal digits. C 2018 5.2.4.2.2 12 says `DBL_DIG` must be at least ten, and it is the number of digits such that any decimal numeral with `DBL_DIG` significant decimal digits (such as 1.234567809•10^77) can be rounded to a `double` and back to a `DBL_DIG`-digit decimal numeral without change. A 32-bit double cannot satisfy that. – Eric Postpischil May 17 '19 at 13:42

1 Answer


JavaScript’s default conversion of a Number to a string produces just enough decimal digits to uniquely distinguish the Number. (This arises out of step 5 in clause 7.1.12.1 of the ECMAScript 2018 Language Specification, which I explain a little here.) Formatting via console.log is not covered by the ECMAScript specification, but likely the Number is converted to a string using the same rules as for NumberToString.

Since stopping at the ten’s digit, producing 131621703842267140, is enough to distinguish the floating-point number from its two neighboring representable values, 131621703842267120 and 131621703842267152, JavaScript stops there.
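
A quick way to check this round-tripping in a console (not part of the original answer; the neighbor values are the ones listed above):

// The shorter decimal string parses back to the very same double,
// so no information is lost by stopping at the ten's digit.
console.log(131621703842267140 === 131621703842267136); // true
// The neighboring representable values are distinct Numbers.
console.log(131621703842267120 === 131621703842267136); // false
console.log(131621703842267152 === 131621703842267136); // false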

You can request more digits with toPrecision; the following produces “131621703842267136.000”:

var x = 131621703842267136;
console.log(x.toPrecision(21))
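
Not part of the original answer, but a couple of other ways to surface the full digits, assuming an engine with BigInt support:

var x = 131621703842267136;
// toFixed(0) prints every digit of the integer part because the value
// is below 10^21 (at or above that, toFixed falls back to the default
// string conversion).
console.log(x.toFixed(0));         // "131621703842267136"
// Converting to BigInt preserves the double's exact integer value.
console.log(BigInt(x).toString()); // "131621703842267136"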

(Note that 131621703842267136 is exactly representable in IEEE-754 basic 64-bit binary format, which JavaScript uses for Number, and many C implementations use for double. So there are no rounding errors in this question due to the floating-point format. All changes result from conversions between decimal and floating-point.)
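
As a further check (echoing the subtraction suggested in the comments above), the exactness of the stored value can be demonstrated with arithmetic on two exactly representable operands:

// 131621703842200000 is also exactly representable, and the exact
// difference 67136 is representable too, so the subtraction incurs
// no rounding error and exposes the low-order digits.
console.log(131621703842267136 - 131621703842200000); // 67136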

Prior to an edit at 2019-05-17 16:27:53 UTC, the question stated that a C program was showing “131621703737409536.000000” for 131621703842267136. That would not have been conforming to the C standard. The C standard is lax about its floating-point formatting requirements, but producing “131621703737409536.000000” for 131621703842267136 violates them. This is governed by this sentence in C 2018 (and 2011) 7.21.6.1 13:

Otherwise, the source value is bounded by two adjacent decimal strings L < U, both having DECIMAL_DIG significant digits; the value of the resultant decimal string D should satisfy L ≤ D ≤ U, with the extra stipulation that the error should have a correct sign for the current rounding direction.

DECIMAL_DIG must be at least ten, by 5.2.4.2.2 12. The number 131621703**8**42267136 (bold marks the tenth digit) is bounded by the two adjacent ten-digit strings “131621703800000000” and “131621703900000000”. The string “131621703737409536.000000” is not between these.

This also cannot be a result of the C implementation using a different floating-point format for double, as 5.2.4.2.2 requires the format be sufficient to convert at least ten decimal digits to double and back to decimal without change to the value.

Eric Postpischil