
[Image: console output showing the binary representation of 0.57, i.e. the result of `(0.57).toString(2)`]

If I understand correctly, JavaScript numbers are always stored as double-precision floating-point numbers, following the international IEEE 754 standard, which means 52 bits are used for the fraction (significand). But in the picture above, it seems like 0.57 in binary uses 54 bits.

Another thing: (if I understand correctly) 0.55 in binary is also a repeating number. So why does 0.55 + 1 = 1.55 (no loss) while 0.57 + 1 = 1.5699999999999998?


hungneox
  • Possible duplicate of [Is floating point math broken?](https://stackoverflow.com/questions/588004/is-floating-point-math-broken) – phuzi Mar 21 '19 at 12:52
  • I guess `.toString(2)` does rounding during the conversion ... – Jonas Wilms Mar 21 '19 at 12:59
  • 1
    You're fighting the vagaries of JS Math library implementations in different browsers. Additionally, machine (hardware) architecture may enforce different standards for internal representations. Add the wackiness of floating-point math implementations on different OSs, and you have almost no hope of answering your question in general. Which processor (cpu), which OS, which browser...having all of those, you MAY be able to answer your question for that specific combination. – Richard Uie Mar 21 '19 at 14:07
  • @RichardUie Your comment is also an answer :) – hungneox Mar 21 '19 at 14:12
  • 2
    @RichardUie: JavaScript implements ECMA-262, and ECMA-262 specifies the `Number` format and its arithmetic. These operation in this question do not differ between different correct implementations of JavaScript. – Eric Postpischil Mar 22 '19 at 00:47
  • 1
    @phuzi: No, it is not a duplicate. The display behavior asked about here arises due to the ECMA-262 specification, not due to floating-point generally. – Eric Postpischil Mar 22 '19 at 00:48
  • @Eric 1) JavaScript is not an implementation - it's another specification; 2) browsers comply to whatever extent their authors decide; 3) browser compliance places no absolute obligations on the underlying OS; 4) OS math libraries can not magically overpower physical architectural limitations. – Richard Uie Mar 22 '19 at 02:00
  • @RichardUie: 1) The word “implementation” is not limited to a program or machine that implements a specification. It may be used to describe an abstract thing, including a specification of a programming language, that conforms to another thing. In any case, the terminology is unimportant. The point is JavaScript conforms to ECMAScript. 2) I said that **correct** implementations, meaning conforming implementations, do not differ. 3) This is not relevant. – Eric Postpischil Mar 22 '19 at 02:07
  • @RichardUie: 4) The machines we use are, for practical purposes, Universal Turing Machines. The software can be designed to do arbitrary computations regardless of how easy the underlying hardware makes it. It is certainly possible for math software to provide functions different from those directly provided by hardware. Furthermore, none of the behaviors asked about in the question deviate from what is produced by an implementation that conforms to ECMAScript; the observations are explained by ECMAScript. – Eric Postpischil Mar 22 '19 at 02:10
  • @Eric 1) The machines we use are, for practical purposes, Universal Turing Machines. - pointless use of jargon "Universal Turing Machine," since it's an abstract concept. 2) The software can be designed to do arbitrary computations regardless of how easy the underlying hardware makes it. - The fact that it CAN does not mean that it MUST or ever DOES...also, "Turing Complete," the more correct usage, means *can do anything another machine can do* NOT do ANYTHING. Given your perfect understanding of the explicability of these artifacts strictly in ECMA terms, where's your explanation? – Richard Uie Mar 22 '19 at 03:45

3 Answers


Which means it uses 52 bits for the fraction (significand). But in the picture above, it seems like 0.57 in binary uses 54 bits.

JavaScript’s Number type, which is essentially IEEE 754 basic 64-bit binary floating-point, has 53-bit significands. 52 bits are encoded in the “trailing significand” field. The leading bit is encoded via the exponent field (an exponent field of 1-2046 means the leading bit is one, an exponent field of 0 means the leading bit is zero, and an exponent field of 2047 is used for infinity or NaN).

The value you see for .57 has 53 significant bits. The leading “0.” is produced by the toString operation; it is not part of the encoding of the number.
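This is easy to check directly; a small sketch (assuming a conforming engine such as Node or a browser console):

```javascript
// toString(2) shows 53 significant bits for 0.57; the leading "0."
// is produced by formatting, not stored in the significand field.
const bits = (0.57).toString(2);
const afterPoint = bits.split(".")[1]; // digits after the binary point
console.log(bits);
console.log(afterPoint.length); // 53
```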

But why 0.55 + 1 = 1.55 (no loss) and 0.57 + 1 = 1.5699999999999998.

When JavaScript is formatting some Number x for display with its default rules, those rules say to produce the shortest decimal numeral (in its significant digits, not counting decorations like a leading “0.”) that, when converted back to the Number format, produces x. Purposes of this rule include (a) always ensuring the display uniquely identifies which exact Number value was the source value and (b) not using more digits than necessary to accomplish (a).

Thus, if you start with a decimal numeral such as .57 and convert it to a Number, you get some value x that is a result of the conversion having to round to a number representable in the Number format. Then, when x is formatted for display, you get the original number, because the rule that says to produce the shortest number that converts back to x naturally produces the number you started with.

(But that x does not exactly represent 0.57. The nearest double to 0.57 is slightly below it; see the decimal and binary64 representations of it on an IEEE double calculator).
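To see that stored value, you can force more digits than the default shortest form; a quick sketch:

```javascript
// toPrecision bypasses the shortest-round-trip rule and exposes the
// value of the double nearest to 0.57.
console.log((0.57).toPrecision(25)); // 0.5699999999999999511501869
// The default output "0.57" is used only because it round-trips:
console.log(Number("0.57") === 0.57); // true
```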

On the other hand, when you perform some operation such as .57 + 1, you are doing some arithmetic that produces a number y that did not start as a simple decimal numeral. So, when formatting such a number for display, the rule may require more digits be used for it. In other words, when you add .57 and 1, the result in the Number format is not the same number as you get from 1.57. So, to format the result of .57 + 1, JavaScript has to use more digits to distinguish that number from the number you get from 1.57; they are different and must be displayed differently.
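A minimal demonstration:

```javascript
// The sum and the literal are two different doubles, so JavaScript
// must print them differently to keep formatting unambiguous.
console.log(1 + 0.57 === 1.57); // false
console.log(1 + 0.57);          // 1.5699999999999998
console.log(1.57);              // 1.57
```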


If 0.57 was exactly representable as a double, the pre-rounding result of the sum would be exactly 1.57, so 1 + 0.57 would round to the same double as 1.57.

But that's not the case; it's actually 1 + nearest_double(0.57) = 1.569999999999999951150186916493 (pre-rounding, not a double), which rounds down to 1.56999999999999984012788445398. These decimal representations of numbers have many more digits than we need to distinguish 1 ulp (unit in the last place) of the significand, or even the 0.5 ulp max rounding error.

1.57 rounds to ~1.57000000000000006217248937901, so that's not an option for printing the result of 1 + 0.57. The decimal string needs to distinguish the number from adjacent binary64 values.


It just so happens that the rounding that occurs in .55 + 1 yields the same number one gets from converting 1.55 to Number, so displaying the result of .55 + 1 produces “1.55”.
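This can be confirmed in one line:

```javascript
// With 0.55 the rounding error in the addition happens to land on
// exactly the same double as the literal 1.55.
console.log(1 + 0.55 === 1.55); // true
console.log(1 + 0.55);          // 1.55
```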

Eric Postpischil
  • I don't think you explicitly pointed out that the nearest `double` to `0.57` does not exactly represent that number; some people might miss the fact that printing out as `0.57` doesn't mean it's exactly `0.57`. That's why `1 + 0.57` can be a different number than `1.57`; if `0.57` could be stored exactly, the sum that needs to be rounded to the nearest double would be exactly `1.57`, same as the decimal literal. – Peter Cordes Mar 22 '19 at 02:07
  • @PeterCordes: Feel free to edit; I will not mind. I am on mobile until tomorrow afternoon, so adding detailed examples would be tedious. I will take another look then. – Eric Postpischil Mar 22 '19 at 02:13
  • Done, added decimal representations of `double(0.57)`, `1 + double(0.57)` (pre and post rounding to `double`), and `double(1.57)`. – Peter Cordes Mar 22 '19 at 02:35

toString(2) prints the string up to the last non-zero digit.

1.57 has a different bit representation than 1 + 0.57 (although other arithmetic, such as 1.32 + 0.25 below, can still land exactly on 1.57), but 1 + 0.55 in binary equals 1.55, as you can see in the snippet below:

console.log(1.57)                          // the literal, shortest round-trip form
console.log(1.57.toString(2))              // bits of the double nearest 1.57
console.log((1+.57).toString(2))           // different bits than 1.57
console.log("1.32 + 0.25 = ", 1.32 + .25)  // a sum that lands on the double for 1.57
console.log((1.32 + .25).toString(2))
console.log(1.55)
console.log(1.55.toString(2))
console.log((1+.55).toString(2))           // same bits as 1.55

Remember that the computer performs operations on binary numbers; 1.57 or 1.55 is just human-readable output.

barbsan

Number.prototype.toString roughly implements the following section of the ECMA-262 spec:

7.1.12.1 NumberToString(m)

Let n, k, and s be integers such that k ≥ 1, 10 ** (k − 1) ≤ s < 10 ** k, the Number value for s × 10 ** (n − k) is m, and k is as small as possible.

Therefore toString returns the shortest decimal digit string that converts back to the stored value; it does not show the exact bits stored.

What you see in the console is not an exact representation either.

Jonas Wilms