How does JavaScript convert numbers to strings? I expected it to round the number to some precision, but that doesn't appear to be the case. I ran the following tests:

> 0.1 + 0.2
0.30000000000000004
> (0.1 + 0.2).toFixed(20)
'0.30000000000000004441'
> 0.2
0.2
> (0.2).toFixed(20)
'0.20000000000000001110'

This is the behavior in Safari 6.1.1, Firefox 25.0.1, and Node.js 0.10.21.

It looks like JavaScript displays the 17th digit after the decimal point for (0.1 + 0.2) but hides it for 0.2 (and so the number is rounded to 0.2).

How exactly does number-to-string conversion work in JavaScript?

martinkunev
  • When in doubt, consult the Standard: [`Number#toFixed`](http://www.ecma-international.org/ecma-262/5.1/#sec-15.7.4.5)? – DCoder Jan 29 '14 at 20:22
  • @DCoder "The output of toFixed may be more precise than toString for some values because toString only prints enough significant digits to distinguish the number from adjacent number values", thank you, I think this answers my question. – martinkunev Jan 29 '14 at 20:48

2 Answers

From the question's author:

I found the answer in the ECMAScript specification: http://www.ecma-international.org/ecma-262/5.1/#sec-9.8.1

When printing a number, JavaScript calls toString(). The specification of toString() explains how JavaScript decides what to print. The note below

The least significant digit of s is not always uniquely determined by the requirements listed in step 5.

as well as the one here: http://www.ecma-international.org/ecma-262/5.1/#sec-15.7.4.5

The output of toFixed may be more precise than toString for some values because toString only prints enough significant digits to distinguish the number from adjacent number values.

explain the basic idea behind the behavior of toString().
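
A quick way to see this rule in action is to test which strings parse back to the same double (a minimal sketch; any ES5-conformant engine should agree with the results shown):

> Number("0.2") === 0.2
true
> Number("0.3") === 0.1 + 0.2
false
> Number("0.30000000000000004") === 0.1 + 0.2
true

The string "0.2" already round-trips to the stored value, so toString() stops there; for 0.1 + 0.2, the string "0.3" would map to a different double, so toString() has to emit 17 significant digits to distinguish the two.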

martinkunev

This isn't about how JavaScript works, but about how floating-point operations work in general. Computers work in binary, but people mostly work in base 10. This introduces some imprecision here and there; how bad the imprecision is depends on how the hardware and (sometimes) the software in question work. The key point is that you can't predict exactly what the errors will be, only that there will be errors.
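
For example (a sketch of that unpredictability; the Number.EPSILON comparison assumes an ES2015+ engine):

> 0.1 + 0.2 === 0.3
false
> 0.2 + 0.2 === 0.4
true
> Math.abs((0.1 + 0.2) - 0.3) < Number.EPSILON
true

Some sums happen to land exactly on the nearest representable double and some don't, which is why floating-point code usually compares with a tolerance instead of exact equality.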

JavaScript doesn't have a rule like "display so many digits after the decimal point for certain numbers but not for others." Instead, the computer gives you its best estimate of the number requested. 0.2 is not something that can be represented exactly in binary, so if you tell the computer to use more precision than it otherwise would, you get rounding errors (the 1110 at the end, in this case).
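
You can inspect the stored approximation directly by asking for more digits (toPrecision accepts up to 21 significant digits in ES5):

> (0.2).toPrecision(21)
'0.200000000000000011102'
> (0.2).toFixed(20)
'0.20000000000000001110'

The double closest to 0.2 is exactly 0.200000000000000011102230246251565404236316680908203125; the default conversion hides everything past "0.2" because those digits aren't needed to identify the value.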

This is actually the same question as this old one. From the excellent community wiki answer there:

All floating point math is like this and is based on the IEEE 754 standard. JavaScript uses 64-bit floating point representation, which is the same as Java's double.
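
To make the 64-bit claim concrete, you can inspect the raw IEEE 754 bits (a sketch using a hypothetical doubleToHex helper built on DataView; the hex patterns shown are the standard encodings of these doubles):

function doubleToHex(x) {
  var view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x); // writes the 8 bytes big-endian by default
  var hex = "";
  for (var i = 0; i < 8; i++) {
    var b = view.getUint8(i).toString(16);
    hex += b.length === 1 ? "0" + b : b;
  }
  return "0x" + hex.toUpperCase();
}

doubleToHex(0.3);       // "0x3FD3333333333333"
doubleToHex(0.1 + 0.2); // "0x3FD3333333333334"

The two bit patterns differ by one unit in the last place, which is exactly why toString() needs 17 digits to tell the values apart.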

elixenide
  • I'm familiar with how floating-point arithmetic works. My question was why some digits get displayed in some cases and not in others. Apparently 0.2 is not the computer's best estimate, as the last line of the sample code demonstrates. – martinkunev Jan 29 '14 at 20:44
  • I may not have been clear. When you do arithmetic, like `0.1 + 0.2`, or when you use `.toFixed()`, you force the computer to abandon the fixed number of decimal places it was using. So, `0.2` may be `0.2`, but `0.1 + 0.2` makes the computer "forget" that it should only use 1 decimal place to display the answer. `0.2` is, in fact, the computer's best estimate of `0.2`, because it can "remember" when to stop calculating additional digits. But it isn't sophisticated enough at a native level to figure it out when you do floating-point arithmetic on that number. – elixenide Jan 29 '14 at 20:49
  • This is not the way JavaScript behaves, as can be seen by executing `0.2 + 0.2` and `(0.2 + 0.2).toFixed(20)`. – martinkunev Jan 29 '14 at 21:06
  • @martinkunev I'm not sure what you mean by "this." It is the way it works. Just because some examples produce the expected result doesn't mean that rounding errors aren't occurring. `0.2 + 9.0` also works (`9.2`), but `0.2+9.1` produces `9.299999999999999`. The point is that rounding errors do occur, but not in a predictable fashion such that a programmer can rely on more digits than are truly necessary. – elixenide Jan 29 '14 at 21:09
  • @martinkunev See my edit above; the question I linked to explains this well. – elixenide Jan 29 '14 at 21:13
  • Read my comments. There is no "fixed number of decimal places" in javascript. The cases where the addition appears exact contradict your explanation. The reason behind the experienced behavior is the way toString() works. The proof is in the ECMA standard (see my links). – martinkunev Jan 29 '14 at 21:41
  • @Ed Cottrell It looks like you are not reading the question carefully and are just responding to one that has been asked a zillion times here: how floating-point numbers are implemented. The question being asked here is why `javascript:0.30000000000000004` evaluates to `0.30000000000000004` but `javascript:0.30000000000000001` evaluates to `0.3`. – abikov Feb 07 '14 at 14:48
  • @abikov With all due respect, I read the question correctly; perhaps you misunderstood my answer. The point is, at *some level* of precision, floating point arithmetic is *always* imprecise, and one cannot predict exactly how imprecise it will be. Forcing a particular precision (like `.toFixed(20)`) risks exposing imprecision at the requested number of digits. This is why `0.2` is fine, but `0.1+0.2` or `(0.2).toFixed(20)` exposes the imprecision, which just happens to be around the 17th position after the decimal. – elixenide Feb 07 '14 at 15:05
  • @EdCottrell Here is an example without any arithmetic: `x = 0.30000000000000004441` displays as `0.30000000000000004`, while `x.toFixed(20)` gives `'0.30000000000000004441'` (no loss of precision up to 20 digits; the number is rounded to 17 digits). But `x = 0.20000000000000001110` displays as `0.2`, while `x.toFixed(20)` gives `'0.20000000000000001110'` (no loss of precision up to 20 digits, BUT the number is NOT rounded to 17 digits, which would be `0.20000000000000001`; instead it is rounded to 16 digits or fewer). See the runnable version of this example below. – abikov Feb 07 '14 at 15:33
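
For reference, here is abikov's example as a runnable transcript (a sketch: the literals come from the comment above, and the outputs are what an ES5-conformant engine prints):

> x = 0.30000000000000004441
0.30000000000000004
> x.toFixed(20)
'0.30000000000000004441'
> y = 0.20000000000000001110
0.2
> y.toFixed(20)
'0.20000000000000001110'

Both literals are rounded to the nearest double at parse time; toString() then prints the shortest decimal that maps back to that double, which takes 17 significant digits for x but only "0.2" for y.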