
I am aware that JavaScript numbers are all 64-bit floats, as defined in the IEEE 754 standard, and of the ramifications of that when performing computations, as per:

Is floating point math broken?

(and many other questions)

What I don't understand is how JavaScript numbers somehow retain their original precision. As an example, if you assign the value 0.1 to a variable and then view it in the debugger or output it to the console, it is reported as having exactly that value:

const foo = 0.1;
console.log(foo); // reports 0.1

However, 0.1 is not exactly representable as a double-precision float, so the actual value of foo is 0.1000000000000000055511151231257827021181583404541015625. Why is it not reported as that in the above example? And where is the additional precision information stored?
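For what it's worth, I can see something like the stored value if I explicitly request more digits, e.g. with toPrecision or toFixed (assuming those methods format the underlying double rather than the value as originally typed):

const foo = 0.1;
// Request 21 significant digits, more than the ~17 needed to round-trip a double
console.log(foo.toPrecision(21)); // "0.100000000000000005551"
// Request 20 digits after the decimal point
console.log(foo.toFixed(20));     // "0.10000000000000000555"

So the extra digits clearly exist somewhere, yet the default output is still just "0.1".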

ColinE