
How does it work internally?

How does it decide to convert 0.29999999999999998 to 0.3, even though 0.3 cannot be represented in binary?

Here are some more examples:

scala> 0.29999999999999998
res1: Double = 0.3

scala> 0.29999999999999997
res2: Double = 0.3

scala> 0.29999999999999996
res3: Double = 0.29999999999999993

scala> 0.29999999999999995
res4: Double = 0.29999999999999993
Tanin
  • Possible duplicate of [Is floating point math broken?](http://stackoverflow.com/questions/588004/is-floating-point-math-broken) – tmyklebu Dec 04 '15 at 05:29
  • I went to the link and understand why 0.3 is represented as 0.2999999.. but it doesn't explain how the machine converts 0.2999999.. back to 0.3. (e.g. does it have a mapping table stored somewhere?) – Tanin Dec 04 '15 at 05:56

2 Answers


There are two conversions involved.

First 0.29999999999999998 is converted to 0.299999999999999988897769753748434595763683319091796875, the nearest representable number.

Next, 0.299999999999999988897769753748434595763683319091796875 is converted to decimal for printing. 0.3 is also one of the numbers that converts to 0.299999999999999988897769753748434595763683319091796875, and it is the one that gets printed because it is the shortest decimal that does so.

Every finite double number is exactly representable as a decimal fraction. Generally, default output does not attempt to print the exact value, because it can be very long - far longer than the example above. A common choice is to print the shortest decimal fraction that would convert to the double on input. Both conversions are done using non-trivial algorithms. See Algorithm to convert an IEEE 754 double to a string? for some discussion and references to output algorithms.
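Both conversions can be observed directly on the JVM (Java shown here; Scala's default Double printing delegates to Java's Double.toString, and BigDecimal's double constructor exposes the exact stored binary value):

```java
import java.math.BigDecimal;

public class TwoConversions {
    public static void main(String[] args) {
        // Conversion 1: decimal text -> nearest representable double
        double d = Double.parseDouble("0.29999999999999998");

        // The exact decimal value of the stored double:
        System.out.println(new BigDecimal(d));
        // prints 0.299999999999999988897769753748434595763683319091796875

        // Conversion 2: double -> decimal text; a short decimal that
        // maps back to the same double is chosen:
        System.out.println(d); // prints 0.3

        // "0.3" round-trips to the same double:
        System.out.println(Double.parseDouble("0.3") == d); // prints true
    }
}
```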

==============================================================

There has been some discussion in comments on the value 0.30000000000000004. I agree with the comments by Rick Regan and Jesper, but thought it might be useful to add to this answer.

The exact value of the closest double to 0.30000000000000004 is 0.3000000000000000444089209850062616169452667236328125. All decimal numbers in the range [0.3000000000000000166533453693773481063544750213623046875, 0.3000000000000000721644966006351751275360584259033203125] convert to that value, and no numbers even slightly outside that range do so. 0.3000000000000000 is outside the range, so it does not have enough digits. 0.30000000000000004 is inside the range, so there is no need for more digits to correctly identify the double.
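This interval can be checked on the JVM as well (Java shown; 0.1 + 0.2 is the classic expression that produces exactly this double):

```java
import java.math.BigDecimal;

public class RoundTripInterval {
    public static void main(String[] args) {
        double d = 0.1 + 0.2; // the double nearest to 0.30000000000000004

        // Its exact stored value:
        System.out.println(new BigDecimal(d));
        // prints 0.3000000000000000444089209850062616169452667236328125

        // 0.3000000000000000 lies outside the interval, so it
        // identifies a different double (namely 0.3):
        System.out.println(Double.parseDouble("0.3000000000000000") == d); // prints false

        // 0.30000000000000004 lies inside the interval:
        System.out.println(Double.parseDouble("0.30000000000000004") == d); // prints true
    }
}
```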

Patricia Shanahan
  • Why doesn't it simply print the nearest representable number instead of printing 0.3? Also, how does it go from 0.299999999999999988897769753748434595763683319091796875 to 0.3? What mechanism does it use? Thanks! – Tanin Dec 04 '15 at 06:02
  • Just to be sure I understand it correctly -- 0.300000000000000044408920985006 is what the machine stores, but the machine always outputs 0.30000000000000004 because of the algorithm, right? – Tanin Dec 04 '15 at 06:25
  • Can you point to me somewhere that explains why 17 decimal positions is chosen to be the output format? – Tanin Dec 04 '15 at 06:27
  • @Tanin because a `double`, which is in 64-bit [IEEE 754 format](https://en.wikipedia.org/wiki/IEEE_floating_point), has a precision of approximately 16 decimal digits. Do some research on how floating-point numbers work on computers (click the link). – Jesper Dec 04 '15 at 07:53
  • @Tanin The mechanism is rounding -- in decimal. The underlying printing algorithm effectively takes the exact decimal representation of the floating-point number and rounds it to the lowest place it can and still have you get the original floating-point number if you convert again. – Rick Regan Dec 04 '15 at 13:55
  • @Tanin Here's one place you can read why 17 is the "magic number" (an article I wrote earlier this year): http://www.exploringbinary.com/number-of-digits-required-for-round-trip-conversions/ – Rick Regan Dec 04 '15 at 14:02
  • @RickRegan I'm not sure I understand it correctly. But the reason is that we want consistency when converting back and forth between base 2 and base 10, right? That's why we truncate it to 15-17 decimal places. – Tanin Dec 10 '15 at 07:35
  • @Jesper thank you very much. After re-reading everything from you and RickRegan multiple times, I think I get almost all of it. – Tanin Dec 10 '15 at 07:36
  • @Tanin Printing to 17 decimal digits guarantees you can take that number and convert it back to the same double. The 15 decimal digit limit means any number with 15 significant digits or less, converted to double and then printed to 15 digits, will give you your original number. (I don't know if that's what you mean by "consistency"). (BTW: rounding is done, not truncation.) – Rick Regan Dec 10 '15 at 17:10
  • @RickRegan That's exactly what I wanted to say, but it's a better explanation. Thank you again! – Tanin Dec 10 '15 at 18:05
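The 15- and 17-digit round-trip properties discussed in these comments can be sketched on the JVM (Java shown; Locale.ROOT is passed only to pin the decimal separator to '.'):

```java
public class RoundTripDigits {
    public static void main(String[] args) {
        // 17 significant digits always survive double -> text -> double:
        double d = Math.PI;
        String s17 = String.format(java.util.Locale.ROOT, "%.16e", d); // 1 + 16 = 17 digits
        System.out.println(Double.parseDouble(s17) == d); // prints true

        // 15 significant digits always survive text -> double -> text:
        double e = Double.parseDouble("0.123456789012345");
        System.out.println(String.format(java.util.Locale.ROOT, "%.14e", e));
        // prints 1.23456789012345e-01
    }
}
```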

Note that in Scala a Double (see the IEEE 754 Standard and IEEE Floating-Point Arithmetic) is rounded to the nearest representable value when declared,

val x = 0.29999999999999998
x: Double = 0.3

"0.29999999999999998".toDouble
Double = 0.3

so much so that

0.2999999999999999999999999999999999999999999999999999999999998
Double = 0.3

Also with BigDecimal, which provides an arbitrary-precision decimal floating-point representation (see the API), an original value of type Double (the parameter to the constructor) has already been rounded to the nearest representable value, namely

BigDecimal(0.29999999999999998) == 0.3
Boolean = true

BigDecimal(0.29999999999999998)
scala.math.BigDecimal = 0.3

However, a textual declaration of the original value is not interpreted as a Double and hence is not rounded,

BigDecimal("0.29999999999999998") == 0.3
Boolean = false

namely,

BigDecimal("0.29999999999999998")
scala.math.BigDecimal = 0.29999999999999998
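For comparison, Java's BigDecimal exposes the same distinction through its different entry points (Scala's BigDecimal(Double) behaves roughly like BigDecimal.valueOf here, going through the double's printed form):

```java
import java.math.BigDecimal;

public class BigDecimalConstructors {
    public static void main(String[] args) {
        // Through the double's printed form, as Scala's BigDecimal(Double) does:
        System.out.println(BigDecimal.valueOf(0.29999999999999998)); // prints 0.3

        // The double constructor keeps the exact binary value instead:
        System.out.println(new BigDecimal(0.29999999999999998));
        // prints 0.299999999999999988897769753748434595763683319091796875

        // The String constructor preserves the decimal text exactly:
        System.out.println(new BigDecimal("0.29999999999999998")); // prints 0.29999999999999998
    }
}
```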
elm