
I understand that with the IEEE representation (or any binary representation) for double, one can't represent 0.1 exactly with a finite number of bits.

I have two questions with this in mind:

  1. If C++ also uses the same standard for double, why doesn't it mess up 0.1 + 0.2 the way JavaScript does?
  2. Why does JavaScript print `console.log(0.1)` correctly when it can't hold 0.1 exactly in memory? (A short demonstration of both behaviours follows below.)
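
For reference, here is a minimal C++ sketch (assuming the implementation uses IEEE 754 doubles, which is typical but not guaranteed) showing that C++ stores the very same value as JavaScript and merely formats it differently by default:

```cpp
#include <iomanip>
#include <iostream>

int main() {
    double sum = 0.1 + 0.2;

    // Default iostream precision is 6 significant digits, so the tiny
    // rounding error is hidden and "0.3" is printed.
    std::cout << sum << '\n';                           // 0.3

    // 17 significant digits are enough to expose the stored value,
    // which is what JavaScript's default conversion shows.
    std::cout << std::setprecision(17) << sum << '\n';  // 0.30000000000000004
}
```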
batman
  • C++ as a language does not necessarily use the same standard for floating-point numbers. It's implementation-defined; see http://stackoverflow.com/questions/5777484/how-to-check-if-c-compiler-uses-ieee-754-floating-point-standard for a more detailed discussion on this topic. – Christian Hackl Jul 27 '14 at 09:34
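
(As a quick, non-authoritative check of that point on a given implementation: `std::numeric_limits<double>::is_iec559` reports whether double follows the IEC 559 / IEEE 754 format.)

```cpp
#include <iostream>
#include <limits>

int main() {
    // Prints "true" on implementations whose double is IEC 559 / IEEE 754.
    std::cout << std::boolalpha
              << std::numeric_limits<double>::is_iec559 << '\n';
}
```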

2 Answers


There are at least three reasonable choices for conversion of floating point numbers to strings:

  1. Print the exact value. This seems like an obvious choice, but has downsides. The exact decimal value of a finite floating point number always exists, but may have hundreds of significant digits, most of which are of no practical use. `java.util.BigDecimal`'s `toString` does this.
  2. Print just enough digits to uniquely identify the floating point number. This is the choice made, for example, in Java for default conversion of double or float, and it is also what JavaScript's default number-to-string conversion does, which is why `console.log(0.1)` shows "0.1".
  3. Print few enough digits to ensure that most output digits will be unaffected by rounding error on most simple calculations. That was the choice made in C.

Each of these has advantages and disadvantages. A choice 3 conversion will get "0.3", the "right" answer, for the result of adding 0.1 and 0.2. On the other hand, reading in a value printed this way cannot be depended on to recover the original float, because multiple floating point values map to the same string.
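
A small C++ sketch of that trade-off (assuming IEEE 754 doubles; `round_trips` is just a hypothetical helper for this demonstration): one significant digit yields the pleasant-looking "0.3" but does not recover the original bits when parsed back, while `max_digits10` (17 for double) always does:

```cpp
#include <iomanip>
#include <iostream>
#include <limits>
#include <sstream>
#include <string>

// Format `value` with `digits` significant digits, parse the string back,
// and report whether the original double is recovered exactly.
static void round_trips(double value, int digits) {
    std::ostringstream out;
    out << std::setprecision(digits) << value;
    double parsed = std::stod(out.str());
    std::cout << '"' << out.str() << "\" "
              << (parsed == value ? "round-trips" : "does NOT round-trip") << '\n';
}

int main() {
    double sum = 0.1 + 0.2;
    round_trips(sum, 1);                                          // "0.3" does NOT round-trip
    round_trips(sum, std::numeric_limits<double>::max_digits10);  // 17 digits round-trip
}
```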

I don't think any of these options is "right" or "wrong". Languages typically have ways of forcing one of the non-default options, and that should be done if the default is not the best choice for a particular output.
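
In C and C++, for instance, the printf format string selects among these behaviours (a sketch, assuming a C99/C++11-conforming standard library):

```cpp
#include <cstdio>

int main() {
    double sum = 0.1 + 0.2;

    std::printf("%g\n", sum);    // choice 3 style: "0.3" (6 significant digits)
    std::printf("%.17g\n", sum); // choice 2 style: "0.30000000000000004"
    std::printf("%a\n", sum);    // exact hexadecimal form, e.g. "0x1.3333333333334p-2"
}
```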

Patricia Shanahan
  • Note that for interpreted languages used with a Read-Eval-Print Loop, choice 2 is quite an obvious default, though... – aka.nice Jul 27 '14 at 14:29

Because it prints x to n decimal places, and that rounded value happens to be correct.
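
A C++ illustration of the same idea (assuming IEEE 754 doubles): as long as fewer than 17 significant digits are requested, the stored approximation of 0.1 rounds back to exactly "0.1":

```cpp
#include <iomanip>
#include <iostream>

int main() {
    std::cout << 0.1 << '\n';                           // 0.1 (default: 6 significant digits)
    std::cout << std::setprecision(16) << 0.1 << '\n';  // 0.1
    std::cout << std::setprecision(17) << 0.1 << '\n';  // 0.10000000000000001
}
```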

Ed Heal
  • Where the actual value of *n* is irrelevant. You cannot print the actual (internal) value of "0.1" in decimal notation, just as you cannot print the result of `1/3` with 100% accuracy. – Jongware Jul 27 '14 at 08:43
  • @Jongware The actual, internal value of 0.1 is 0.1000000000000000055511151231257827021181583404541015625. Every finite IEEE 754 floating point number is exactly representable in decimal, because every power of 2 is a factor of some power of 10. 3, on the other hand, is not a factor of any power of 10. (A short sketch after this comment thread demonstrates this.) – Patricia Shanahan Jul 27 '14 at 08:57
  • Hmmm .. thanks for proving me wrong! I could not have been more surprised if you managed to print `1/3` as well! I believed the problem was that some values in *binary* had repeating digits, just as 1/3 and 1/7 have in decimal. – Jongware Jul 27 '14 at 17:00
  • Ah, wait. The number of digits that can be stored in a `float` or `double` is limited. Although I guess I'm still wondering how you can confidently state "the internal value **is** xxx". Surely there must be an internal memory limit as well? – Jongware Jul 27 '14 at 17:31
  • You ask the computer to store the number. It does its best. Then you ask it to print it. It does its best. That happens to be correct, as far as the computer is concerned. – Ed Heal Jul 27 '14 at 18:30
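
To make the point about exact representability concrete, here is a small sketch; whether the full expansion is printed depends on the C library producing correctly rounded conversions (glibc does):

```cpp
#include <cstdio>

int main() {
    // The double nearest to 0.1 is 3602879701896397 / 2^55, whose exact decimal
    // expansion has 55 fractional digits, so %.55f can show all of it:
    // 0.1000000000000000055511151231257827021181583404541015625
    std::printf("%.55f\n", 0.1);
}
```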