
Given the decimal number 0.2:

Example:

var theNumber = 0.2;

I ASSUME it would be stored in memory (based on the IEEE 754 double-precision 64-bit floating-point format) as

0-01111111100-1001100110011001100110011001100110011001100110011001

That binary number is actually rounded to fit into 64 bits.

If we take that value and convert it back to decimal, we will have

0.19999999999999998

(0.1999999999999999833466546306226518936455249786376953125)

Not exactly 0.2

My question is: when we ask for the decimal value of theNumber (e.g. alert(theNumber)), how does the JavaScript runtime know that theNumber was originally 0.2?

vothaison
  • Interesting question. Never actually thought of that but it seems that JS will still hold the original value, too. Moreover, if you do `theNumber + 0` you still get `0.2` as the result, so the `+ 0` is apparently a no-op. However `theNumber + 1 - 1` is now incorrect because it does use the underlying value for mathematical operations. – VLAZ Nov 07 '18 at 11:27
  • The value you get for `0.2` for me in Chrome is -> `001100110011001100110011001100110011001100110011001101`. If you do -> `theNumber.toFixed(54)` you will get `0.200000000000000011102230246251565404236316680908203125`. Doing the +1 -1 as @vlaz suggested, you will then get `0.199999999999999955591079014993738383054733276367187500`. So to me it looks like the default rendering for a number has some standard truncating; to how many decimals, I've not been able to find, it's maybe somewhere in the specs. – Keith Nov 07 '18 at 11:32
  • Where did you get 0.1999999999999999833466546306226518936455249786376953125 for the result of `0.2`? The correct value is 0.200000000000000011102230246251565404236316680908203125. – Eric Postpischil Nov 07 '18 at 12:49
  • Thanks, guys. Looks like I gotta read more specs. – vothaison Nov 07 '18 at 17:40
  • @eric, I thought 0.2 actually went into memory. – vothaison Nov 07 '18 at 17:41
  • So the binary value is rounded up? Producing the 0.20000...125? – vothaison Nov 07 '18 at 18:05

3 Answers


JavaScript’s default conversion of a Number to a string produces just enough decimal digits to uniquely distinguish the Number. (This arises out of step 5 in clause 7.1.12.1 of the ECMAScript 2018 Language Specification, which I explain a little here.)

Let’s consider the conversion of a decimal numeral to a Number first. When a numeral is converted to a Number, its exact mathematical value is rounded to the nearest value representable in a Number. So, when 0.2 in source code is converted to a Number, the result is 0.200000000000000011102230246251565404236316680908203125.
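For instance, you can see the stored value by asking toFixed for more digits than the default conversion produces (ES2018 allows up to 100 fraction digits):

console.log((0.2).toFixed(54));
// "0.200000000000000011102230246251565404236316680908203125"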

When converting a Number to decimal, how many digits do we need to produce to uniquely distinguish the Number? In the case of 0.200000000000000011102230246251565404236316680908203125, if we produce “0.2”, we have a decimal numeral that, when converted back to a Number, again yields 0.200000000000000011102230246251565404236316680908203125. Thus, “0.2” uniquely distinguishes 0.200000000000000011102230246251565404236316680908203125 from other Number values, so it is all we need.
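A quick way to check this in a console (nothing here is specific to this answer, just the ordinary built-in conversions):

console.log((0.2).toString());      // "0.2"
console.log(Number("0.2") === 0.2); // true: parsing "0.2" yields the same Number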

In other words, JavaScript’s rule of producing just enough digits to distinguish the Number means that any short decimal numeral when converted to Number and back to string will produce the same decimal numeral (except with insignificant zeros removed, so “0.2000” will become “0.2” or “045” will become “45”). (Once the decimal numeral becomes long enough to conflict with the Number value, it may no longer survive a round-trip conversion. For example, “0.20000000000000003” will become the Number 0.2000000000000000388578058618804789148271083831787109375 and then the string “0.20000000000000004”.)
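For instance, the failing round trip just described:

console.log(Number("0.20000000000000003").toString());
// "0.20000000000000004" (the long numeral does not survive the trip)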

If, as a result of arithmetic, we had a number close to 0.200000000000000011102230246251565404236316680908203125 but different, such as 0.2000000000000000388578058618804789148271083831787109375, then JavaScript will print more digits, “0.20000000000000004” in this case, because it needs more digits to distinguish it from the “0.2” case.
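One expression that produces that nearby Number (this particular arithmetic is my illustration, not taken from the answer above):

console.log(0.1 + 0.2 - 0.1);         // 0.20000000000000004
console.log(0.1 + 0.2 - 0.1 === 0.2); // false: it is a neighboring Number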

Eric Postpischil

In fact, 0.2 is represented by a different bit sequence than the one you posted.
Whenever your result matches the correct bit sequence, the console will output 0.2. But if your calculation produces a different sequence, the console will output something like your 0.19999999999999998.

A similar situation occurs with the most common example, 0.1 + 0.2, which gives the output 0.30000000000000004 because the bit sequence of this result differs from the one in 0.3's representation.

console.log(0.2)         // 0.2
console.log(0.05 + 0.15) // 0.2 (the sum rounds to the same bit sequence as 0.2)
console.log(0.02 + 0.18) // 0.19999999999999998 (the sum rounds to a different sequence)

console.log(0.3)         // 0.3
console.log(0.1 + 0.2)   // 0.30000000000000004
console.log(0.05 + 0.25) // 0.3

From the ECMAScript Language Specification:

11.8.3.1 Static Semantics: MV
A numeric literal stands for a value of the Number type. This value is determined in two steps: first, a mathematical value (MV) is derived from the literal; second, this mathematical value is rounded [...(and here whole procedure is described)]

You may also be interested in the following section:

6.1.6 Number type
[...]
In this specification, the phrase “the Number value for x” where x represents an exact real mathematical quantity [...] means a Number value chosen in the following manner.
[...(whole procedure is described)]
(This procedure corresponds exactly to the behaviour of the IEEE 754-2008 “round to nearest, ties to even” mode.)
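To tie this back to the two long decimals discussed in the question and comments: each literal below rounds to its own nearest Number, so they print differently (a quick sketch using those exact digits):

console.log(0.200000000000000011102230246251565404236316680908203125);
// 0.2
console.log(0.1999999999999999833466546306226518936455249786376953125);
// 0.19999999999999998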

barbsan
  • Thanks, Barb. I thought 0.2 was the one that went into memory. So, first it is "transformed" into its true number form. I'm gonna read more specs. – vothaison Nov 07 '18 at 17:47
  • Wait. So the binary is rounded up? Not truncated? – vothaison Nov 07 '18 at 18:03
  • 0.2 is only a human-readable representation; in fact, all numbers are kept in binary form. The binary is neither rounded nor truncated; its decimal representation is rounded – barbsan Nov 08 '18 at 07:44
  • @vothaison I've added some references – barbsan Nov 08 '18 at 08:08
  • I have seen those pages, but didn't understand it until after reading answers/comments from you guys. Thanks. :D – vothaison Nov 08 '18 at 08:28

So, my ASSUMPTION is wrong.

I have written a small program to do the experiment.
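Here is a minimal sketch of such an experiment (the helper name doubleToBits is mine, not from the original program); it dumps the raw IEEE 754 bits of a Number via a DataView:

function doubleToBits(x) {
  var view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x); // big-endian by default
  var hi = view.getUint32(0).toString(2).padStart(32, "0");
  var lo = view.getUint32(4).toString(2).padStart(32, "0");
  var bits = hi + lo;
  // sign (1 bit), exponent (11 bits), mantissa (52 bits)
  return bits[0] + "-" + bits.slice(1, 12) + "-" + bits.slice(12);
}

console.log(doubleToBits(0.2));
// "0-01111111100-1001100110011001100110011001100110011001100110011010"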

The binary value that goes into memory is not

0-01111111100-1001100110011001100110011001100110011001100110011001

The mantissa part is not 1001100110011001100110011001100110011001100110011001

I got that because I truncated the value instead of rounding it. :((

1001100110011001100110011001100110011001100110011001...[1001] needs to be rounded to 52 bits. Bit 53 of the series is a 1, so the series is rounded up and becomes: 1001100110011001100110011001100110011001100110011010

The correct binary value should be:

0-01111111100-1001100110011001100110011001100110011001100110011010

The full decimal of that value is:

0.200 000 000 000 000 011 102 230 246 251 565 404 236 316 680 908 203 125

not

0.199 999 999 999 999 983 346 654 630 622 651 893 645 524 978 637 695 312 5

And, as Eric's answer explains, all decimal numbers that are converted to the binary

0-01111111100-1001100110011001100110011001100110011001100110011010

will be "seen" as 0.2 (unless we use toFixed() to print more digits); all of those decimal numbers SHARE the same binary representation (I really don't know how better to describe it).
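A quick check of that sharing (the long literal is the exact stored value from above):

console.log(0.2 === 0.200000000000000011102230246251565404236316680908203125); // true
console.log(0.2 === 0.20000000000000001); // true: it rounds to the same Number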

vothaison