
Consider a simple code snippet:

let a = 0.1;
let b = 0.2;
let c = 0.3;
let d = a + b;

console.log(c.toString()); //0.3
console.log(d.toString()); //0.30000000000000004

The explanation I could find was that 0.3 cannot be exactly represented as a double-precision floating-point number. But if that is true, how can c hold the value 0.3 without losing precision in the case of direct assignment?

Hemant

1 Answer


Neither can be represented precisely. Use this converter to see what's going on.

JavaScript numbers are represented as double-precision 64-bit numbers (IEEE 754)

0.3 alone gets interpreted as 0x3FD3333333333333, which is, if you work it out by hand:

.299999999999999988897769753748...

When displayed as a decimal, this gets rounded off to 0.3. It's not that the stored value is precisely equal to 0.3 (which can be shown if you try to do more calculations with it), but that, when displayed, it's shown as 0.3 - it's closer to 0.3 than any other double, including the adjacent one at 0.29999999999999993.
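You can see this in the console; a minimal sketch, reusing c and d from the question (toFixed with a large digit count just forces more of the stored double to be printed):

console.log(c.toFixed(20)); // 0.29999999999999998890 - the stored double, not exactly 0.3
console.log(c === d);       // false - the sum landed on the adjacent double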

0.1 + 0.2 is off by a bit - the last digit (in hex) is 4, not 3. 0x3FD3333333333334 is:

.300000000000000044408920985006...
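If you'd rather check those bit patterns without an external converter, here's a sketch (doubleToHex is just an illustrative helper name; it uses only standard DataView methods):

function doubleToHex(x) {
  const view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x); // big-endian byte order by default
  let hex = "";
  for (let i = 0; i < 8; i++) {
    hex += view.getUint8(i).toString(16).padStart(2, "0");
  }
  return "0x" + hex.toUpperCase();
}

console.log(doubleToHex(0.3));       // 0x3FD3333333333333
console.log(doubleToHex(0.1 + 0.2)); // 0x3FD3333333333334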

Similarly, you can't get 0.30000000000000002 from a direct assignment or direct logging, because that value lies between those two representable doubles, and the interpreter must choose one or the other:

console.log(0.30000000000000002);
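// prints 0.30000000000000004 - the literal rounds to the nearer of the two doubles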
CertainPerformance
  • Thanks. It does make sense. What makes `toString` show `.2999999999999999888` as `0.3` but `.3000000000000000444...` as `.30000000000000004`? – Hemant Oct 25 '21 at 04:54
  • I'm wondering that too. I'd have thought that .299999999999999989 (which is past the point of display precision) would be rounded for display to .29999999999999999 (17 decimal places, just like with .30000000000000004), but instead it's shown as 0.3. Maybe these sorts of numbers, for which no other double is closer to a short decimal (eg 0.3, 0.257, etc), are shown *as that decimal number*, since the script-writer is more likely to perceive those decimals as "accurate" than the technically more precise numbers with lots of trailing digits. – CertainPerformance Oct 25 '21 at 05:04
  • IOW: "For the number closest to 0.3 that can be represented, should we display 0.3 or the more precise extended decimal?" Looks like they chose to show 0.3 - same sort of thing for other decimal numbers. – CertainPerformance Oct 25 '21 at 05:07
  • Interesting. But why can't the same argument be applied to `.299999999999999989` as well, showing it as `0.3`? And I see the same behaviour in .NET/C# too. Perhaps there is a standard for displaying values as well? – Hemant Oct 25 '21 at 05:13
  • Probably because .299999999999999989, when represented in the engine, is indistinguishable from the value that .3 represents. Yeah, there's probably a display standard or common algorithm or something like that. – CertainPerformance Oct 25 '21 at 05:15
  • @CertainPerformance: Re “the value that .3 represents”: That is not a good way to think about floating-point numbers. .3 does not represent a binary floating-point number, and no binary floating-point number represents .3. Per the IEEE-754 floating-point standard, floating-point numbers do not approximate real numbers. Rather, each floating-point number represents **exactly** one real number. It is the operations that approximate real arithmetic, not the numbers. When a floating-point operation is performed, it produces a result rounded to the nearest representable number. – Eric Postpischil Oct 25 '21 at 11:33
  • @CertainPerformance: What this means is that when .3 is converted to the IEEE-754 double precision format, the result is 0.299999999999999988897769753748434595763683319091796875. That does not represent .3; it represents 0.299999999999999988897769753748434595763683319091796875. Using this model is essential for understanding, analyzing, debugging, and designing floating-point software. – Eric Postpischil Oct 25 '21 at 11:34
  • @EricPostpischil Yeah, that was imprecise wording on my part. Should have said "the value 0.3 is interpreted as". – CertainPerformance Oct 25 '21 at 12:59