
As per the standard, ES implements numbers as IEEE 754 doubles.

And per https://www.binaryconvert.com/result_double.html?decimal=053055050054055049056048053048053054056053048051050057054 (and other programming languages, e.g. Go: https://play.golang.org/p/5QyT7iPHNim) it looks like the value 5726718050568503296 can be represented exactly without losing precision.

Why does it lose 3 significant digits in JS? (Reproduced in the latest stable Google Chrome and Firefox.)
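A minimal repro (nothing beyond a stock browser console is assumed):

```js
// The literal has 19 significant decimal digits...
var n = 5726718050568503296;
// ...but the default output zeroes out the last three:
console.log(n); // 5726718050568503000
```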

This question was initially triggered by Replicate JavaScript unsafe numbers in Golang.

The value is definitely representable as an IEEE 754 double; see how the naked bits are converted to a float64 in Go: https://play.golang.org/p/zMspidoIh2w
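The same experiment can be sketched in JavaScript with a typed array; the byte values below are my reading of the bit pattern from the converter above, so treat them as an assumption. Even then the console shows the truncated form:

```js
// Fill the exact IEEE 754 bit pattern for 5726718050568503296,
// then read it back as a float64 (mirrors the Go playground above).
var buf = new ArrayBuffer(8);
var view = new DataView(buf);
view.setUint32(0, 0x43D3DE58); // sign + exponent + high bits of the significand
view.setUint32(4, 0xEE6F3D00); // low bits of the significand
console.log(view.getFloat64(0)); // still logs 5726718050568503000
```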

Ken Wayne VanderLinde
zerkms
  • Type `5726718050568503296 > Number.MAX_SAFE_INTEGER` in your browser console. – Pointy Apr 22 '21 at 23:45
  • @Pointy `Number.MAX_SAFE_INTEGER` only guarantees that the values below it are safely representable. But it does not say that everything above it is unsafe. – zerkms Apr 22 '21 at 23:46
  • Yes it does, otherwise "max safe integer" would be a ridiculous name for the constant. – Pointy Apr 22 '21 at 23:47
  • @Pointy check its description - https://tc39.es/ecma262/#sec-number.max_safe_integer. It means not what you think it means – zerkms Apr 22 '21 at 23:47
  • @ASDFGerte it is, see the binaryconvert link – zerkms Apr 22 '21 at 23:48
  • Why are you questioning something that is plainly true? Your value is too big to represent exactly. – Pointy Apr 22 '21 at 23:48
  • IEEE 754 doubles have 51 bits of mantissa. That's about 17 decimal digits. Your number has 19 digits. – Barmar Apr 22 '21 at 23:48
  • @Pointy see the binaryconvert link, it's EXACTLY REPRESENTABLE in IEEE 754 doubles. – zerkms Apr 22 '21 at 23:48
  • Well I guess all of our computers are broken then. – Pointy Apr 22 '21 at 23:48
  • @Barmar binaryconvert and Go can exactly represent it though? – zerkms Apr 22 '21 at 23:49
  • That website is simply wrong. – Pointy Apr 22 '21 at 23:50
  • Btw, IEEE 754 float64 has a 52-bit mantissa, and with the leading-1 convention that's 53 bits of precision, not 51. – ASDFGerte Apr 22 '21 at 23:54
  • C++ https://ideone.com/Fjj4rK but Java: https://ideone.com/dK3Bk9 :shrug: – zerkms Apr 22 '21 at 23:57
  • Ah, my comment above was even more wrong: it only needs to be a power-of-two multiple of some representable integer (minus one digit of precision or something, I didn't finish thinking). What happens when we construct the exact representation in a typed array and read that? – ASDFGerte Apr 23 '21 at 00:03
  • It could be that the issue involves the way the runtime accumulates the numeric value when parsing the text of the constant. If it's using the language environment's native floating point *incrementally* as it parses, then an overflow *before* it gets to the last digit will cause a truncation. – Pointy Apr 23 '21 at 00:06
  • An interesting experiment in JavaScript would be to construct an 8-byte `Uint8Array` and fill it with the binary that should be correct, and then see what mapping that buffer to a `Float64Array` of 1 element gives you. – Pointy Apr 23 '21 at 00:08
  • @zerkms 5726718050568503296 = 2^18 * 21845695688509, and 21845695688509 only requires 45 bits. So you're absolutely right that it's representable. – Ken Wayne VanderLinde Apr 23 '21 at 00:09
  • @KenWayneVanderLinde well it's definitely bigger than `Number.MAX_SAFE_INTEGER` – Pointy Apr 23 '21 at 00:10
  • @Pointy have you checked what `MAX_SAFE_INTEGER` means? It's not what you think it means: https://tc39.es/ecma262/#sec-number.max_safe_integer It's about the representability of both `R` and `R+1`. – zerkms Apr 23 '21 at 00:11
  • I'm thinking that another *possible* explanation is that in some runtimes there may be code that uses the 80-bit precision available internally (and, to some extent, externally) on Intel CPUs. (Not just Intel obviously, but you know what I mean.) – Pointy Apr 23 '21 at 00:12
  • @Pointy That really has nothing to do with it. There are lots of representable integers above `Number.MAX_SAFE_INTEGER`. – Ken Wayne VanderLinde Apr 23 '21 at 00:13
  • The 80-bit precision is a really interesting possibility. – Ken Wayne VanderLinde Apr 23 '21 at 00:13
  • @KenWayneVanderLinde yes but this number definitely requires more mantissa bits. Recall that the mantissa represents a binary *fraction*, not a binary integer. – Pointy Apr 23 '21 at 00:14
  • @Pointy Same difference :) The only way that matters is for the exponent, but the mantissa representation is identical either way. – Ken Wayne VanderLinde Apr 23 '21 at 00:20
  • I have reconstructed the value in Go from naked bits: https://play.golang.org/p/zMspidoIh2w So the value is totally representable. – zerkms Apr 23 '21 at 00:21
  • @Pointy ^ ^ ^ ^ – zerkms Apr 23 '21 at 00:22
  • @Barmar ^ ^ ^ ^ – zerkms Apr 23 '21 at 00:24
  • Yes. but as I said *something* has to read the *textual representation* of the value and perform a series of multiplications and additions in order to create the internal 64-bit floating point value. If along the way there's an unrepresentable intermediate value, you can't end up with the final correct value. – Pointy Apr 23 '21 at 00:24
  • @Pointy "you can't end up with the final correct value" --- I have just demonstrated **you can**: https://play.golang.org/p/zMspidoIh2w – zerkms Apr 23 '21 at 00:24
  • And as I said the "naked bits" approach can be done in JavaScript too with typed arrays, if you want to experiment. – Pointy Apr 23 '21 at 00:25
  • @Pointy my question originally is: "why is this number broken in JS". Given it's broken, what's the point of checking it a second time? I checked it with typed arrays; it returns a value with lost precision. My question stays the same: why? – zerkms Apr 23 '21 at 00:25
  • But @zerkms that's a Go example. JavaScript is not the same runtime. I have no idea what Go does to parse numeric constants but it's far from unbelievable that it might be different than what a JavaScript runtime does. – Pointy Apr 23 '21 at 00:25
  • @Pointy "hat Go does to parse numeric constants" --- it does not parse numeric literals in my code, it fills the bits of ieee754 double. "JavaScript is not the same runtime" --- hence a question: why JS breaks a valid double. – zerkms Apr 23 '21 at 00:26
  • @zerkms there's a difference between a string of decimal digits in UTF-8 and a binary floating point value in internal representation. Creating a `Uint8Array` with what you have discovered to be the working byte values for the representation, then using the same buffer to back a `Float64Array`, will reveal whether the binary representation for the value "works" in JavaScript. – Pointy Apr 23 '21 at 00:27
  • @Pointy I checked it already: it does not work, exactly the same way the numeric literal does not work: `var buf = new ArrayBuffer(8); var view = new DataView(buf); var data = [0x43, 0xD3, 0xDE, 0x58, 0xEE, 0x6F, 0x3D, 0x00]; data.forEach(function (b, i) { view.setUint8(i, b); }); var num = view.getFloat64(0); console.log(num);` My question is: why does it break a **valid value**? – zerkms Apr 23 '21 at 00:28
  • You can do exactly the same thing as your Go example in JavaScript. – Pointy Apr 23 '21 at 00:28
  • I can. And it produces the **wrong** result. The same way like `var a = 5726718050568503296` produces the **wrong** result. – zerkms Apr 23 '21 at 00:29
  • Btw, `5726718050568503296 / 0b10011110111100101100011101110011011110011110100` is exactly `65536`, aka `2**16`. Isn't just the string representation wrong? What I mean is, the float is correct, but the decimal string isn't. – ASDFGerte Apr 23 '21 at 00:36
  • lol @ASDFGerte indeed if you call `.toPrecision(20)` you get the right answer :) – Pointy Apr 23 '21 at 00:39
  • Oof, I think I am too tired; I don't know anymore whether I am making some really stupid mistakes or not. Anyway, writing the number to a typed array and looking at the bits, it's fully correct. – ASDFGerte Apr 23 '21 at 00:43
  • @ASDFGerte I'm confused, do you agree now the value is representable (in an IEEE 754 double)? :-) – zerkms Apr 23 '21 at 00:47
  • @ASDFGerte no you found the actual issue: the default `.toString()` for the value is doing the truncation. The `.toPrecision(20)` (or probably 19) creates a string that shows the correct original value. – Pointy Apr 23 '21 at 00:48
  • It is, and JavaScript represents it correctly, but the default `toString` produces a wrong decimal from it (see the sketch after these comments). – ASDFGerte Apr 23 '21 at 00:48
  • Omg, `(5726718050568503296).toFixed()` – zerkms Apr 23 '21 at 00:48
  • @ASDFGerte put that as an answer please – zerkms Apr 23 '21 at 00:48
  • `toFixed()` for a lot of values ends up becoming `"5726718050568503296"`, including `(5726718050568503290).toFixed()` – Matt Apr 23 '21 at 00:55
  • @Matt that's right, because not all rational numbers are representable in the IEEE 754 double space. – zerkms Apr 23 '21 at 00:56
  • It's interesting, but I cannot see anything at https://tc39.es/ecma262/#sec-numeric-types-number-tostring that would explain why `Number::toString()` truncates, hmmmm – zerkms Apr 23 '21 at 00:57
  • Thanks, this is unfamiliar territory for me, I've learned a lot in this thread. Fwiw, BigInt's toString() seems to handle it fine: `BigInt(5726718050568503296).toString()` – Matt Apr 23 '21 at 00:59
  • It looks like whatever formats the numeric values just assumes there cannot be more than 16 significant digits in a `Number`, so it simply nullifies the tail. – zerkms Apr 23 '21 at 01:01
  • I started reading that part of the spec as well, but tbh I probably just have to stop now. It's 3am here and I am dead tired. I can't write a full answer right now, as it takes time to determine whether the spec simply allows this (most likely) or there is something else afoot. Maybe someone knows and writes an answer; then I can happily read it after a good night's sleep. – ASDFGerte Apr 23 '21 at 01:03
  • @ASDFGerte thanks for your time anyway :-) – zerkms Apr 23 '21 at 01:04
  • Another rabbithole would be [BigInt toString source](https://github.com/WebKit/WebKit/blob/8afe31a018b11741abdf9b4d5bb973d7c1d9ff05/Source/JavaScriptCore/runtime/JSBigInt.cpp#L346) vs [Number toString source](https://github.com/WebKit/WebKit/blob/8afe31a018b11741abdf9b4d5bb973d7c1d9ff05/Source/WTF/wtf/dtoa/double-conversion.cc#L48) I guess. – Matt Apr 23 '21 at 01:12
  • @RReverser https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=fef6c8987fcb9c84d767e70cdb01d184 – zerkms Apr 23 '21 at 01:24
  • "It's interesting but I cannot see anything at tc39.es/ecma262/#sec-numeric-types-number-tostring that would explain why Number::toString() truncates" --- You need to look at step 5, which says to choose the minimal `k` for stringification that still produces a number equal (in terms of floating-point comparison) to the original. – RReverser Apr 23 '21 at 01:31
  • @RReverser yep, `5726718050568503296 === 5726718050568503000`, though it still makes no sense to me why it deliberately cuts precision :-S – zerkms Apr 23 '21 at 01:33
  • @zerkms I can only assume, to make sure that `String(a) == String(b)` would still work if `a == b` on the original numbers. That is, it's a matter of normalization. – RReverser Apr 23 '21 at 01:35
  • @Pointy: The specification of `Number.MAX_SAFE_INTEGER` is it is the largest integer n such that n and n+1 are both exactly representable as a `Number` value, per 20.1.2.6 in ECMAScript 2020 Language Specification. (JavaScript is an implementation of ECMAScript.) This specification implies n+2 is not representable as a `Number` but says nothing about larger integers; they may or may not be representable. Whether the name is ridiculous or not is irrelevant; the text of the specification is what governs. – Eric Postpischil Apr 23 '21 at 11:21
  • @Barmar: IEEE-754 binary64 has 53 bits in the significand (52 encoded in the primary significand field, 1 encoded by way of the exponent field). The number of decimal digits is not particularly relevant; 5,726,718,050,568,503,296 has only 45 significant bits, so it is representable in binary64 regardless of the number of decimal digits needed to display it. – Eric Postpischil Apr 23 '21 at 11:23
  • @ASDFGerte: The significand (which is the preferred name; “mantissa” is an old word for the fraction portion of a logarithm) of an IEEE-754 binary64 is 53 bits, not 52. The primary field used to encode it is 52 bits, but the encoding is not the value. The full value is formed from those 52 bits plus another bit encoded by way of the exponent. The **actual** significand is 53 bits, and one should avoid thinking of the 52 bits alone as the significand. – Eric Postpischil Apr 23 '21 at 11:25
  • @Pointy: Re “this number definitely requires more mantissa bits”: No, this number is 1.00111101111001011000111011100110111100111101•2^62, and that is only 45 bits in the significand. – Eric Postpischil Apr 23 '21 at 11:27
  • @Pointy: Re “something has to read the textual representation of the value and perform a series of multiplications and additions in order to create the internal 64-bit floating point value. If along the way there's an unrepresentable intermediate value, you can't end up with the final correct value”: This cannot happen in a conforming JavaScript implementation. Clause 11.8.3.2 of ECMAScript 2020 Language Specification requires correct rounding for numerals with fewer than 21 digits. This number has 19. – Eric Postpischil Apr 23 '21 at 11:58
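To summarize the experiment the comments converge on, here is a small sketch (outputs as established in the discussion above) showing that the stored bits are exact and only the default decimal rendering is short:

```js
const n = 5726718050568503296;

// The stored bit pattern is exact: 0x43D3DE58EE6F3D00.
const view = new DataView(new ArrayBuffer(8));
view.setFloat64(0, n);
console.log(view.getUint32(0).toString(16)); // "43d3de58"
console.log(view.getUint32(4).toString(16)); // "ee6f3d00"

// Only the default decimal rendering drops digits:
console.log(String(n));         // "5726718050568503000"
console.log(n.toFixed());       // "5726718050568503296"
console.log(n.toPrecision(19)); // "5726718050568503296"
```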

1 Answer


The default rule for JavaScript when converting a Number value to a decimal numeral is to use just enough digits to distinguish the Number value. Specifically, this arises from step 5 in clause 7.1.12.1 of the ECMAScript 2017 Language Specification, per the linked answer. (It is 6.1.6.1.20 in the 2020 version.)

So while 5,726,718,050,568,503,296 is representable, printing it yields “5726718050568503000” because that suffices to distinguish it from the neighboring representable values, 5,726,718,050,568,502,272 and 5,726,718,050,568,504,320.
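A quick sketch of what "just enough digits" means here: the short form parses back to the very same double, and adjacent representable values are 1024 apart at this magnitude (with 52 fraction bits and an exponent of 62, the step is 2^(62-52) = 1024):

```js
// The truncated-looking string denotes the identical double:
console.log(5726718050568503000 === 5726718050568503296); // true

// The gap between adjacent doubles near 2^62 is 2^10 = 1024:
console.log(5726718050568503296 - 5726718050568502272); // 1024
console.log(5726718050568504320 - 5726718050568503296); // 1024
```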

You can request more precision in the conversion to string with `toPrecision`, as in `x.toPrecision(21)`.
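For instance, a quick console check (outputs per the rules above):

```js
const x = 5726718050568503296;
console.log(String(x));         // "5726718050568503000" (shortest round-trip form)
console.log(x.toPrecision(19)); // "5726718050568503296"
console.log(x.toPrecision(21)); // "5726718050568503296.00"
```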

Eric Postpischil