
We know that floating-point numbers lose precision because of how they are represented in memory:

  1. In mathematics, binary fractions cannot exactly represent every decimal fraction (for example, 0.1 is an infinitely repeating fraction in binary).
  2. The fixed-width significand (binary scientific notation) can only represent a finite set of values, so anything in between must be rounded.

Why does printing the floating-point number `0.1` look exact, while printing `0.3 - 0.2` is visibly inaccurate?

I have tried many tests covering many cases.
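For example, here is a minimal sketch of the behavior I mean (I am assuming Java, since the comments below use `println`; any language with IEEE-754 doubles behaves the same way):

```java
public class FloatDemo {
    public static void main(String[] args) {
        // Printing the literal 0.1 looks exact...
        System.out.println(0.1);        // 0.1
        // ...but the "same" value computed as a difference does not.
        System.out.println(0.3 - 0.2);  // 0.09999999999999998
    }
}
```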

  • The number `0.1` can't be represented exactly in [IEEE-754](https://en.wikipedia.org/wiki/IEEE_754) floating-point representation. In a `float`, it is represented as `0.100000001490116119384765625`. In a `double`, it is represented as `0.100000000000000005551115123126`. You can test this [here](https://www.binaryconvert.com/convert_double.html). If it prints `"0.1"` for you, that merely means the number is rounded before it is printed. – Andreas Wenzel Jul 02 '23 at 03:11 (see the `BigDecimal` sketch after these comments)
  • Rounding error. `0.3` and `0.2` are also just approximations. The expression `0.3 - 0.2` produces a number that is slightly different (here, by two units in the last place) from the number `0.1`. – Tim Roberts Jul 02 '23 at 03:19 (see the bit-pattern sketch below)
  • *"Why is there no loss of precision in direct assignment of floating point numbers?"* - In fact, there *is* loss of precision when a *literal* is assigned to a `double` or `float`. You can demonstrate this by trying to assign `0.10000000000000000000001` to a `float` or `double` variable and then printing it. (Actually, the loss of precision occurs at compile time ... but let's not split hairs.) – Stephen C Jul 02 '23 at 03:26 (see the literal-rounding sketch below)
  • Anyway ... you are unlikely to understand this properly by doing experiments. If you want to understand, you need to do some reading. Read [What Every Computer Scientist Should Know About Floating-Point Arithmetic](http://download.oracle.com/docs/cd/E19957-01/806-3568/ncg_goldberg.html). Then read it again. – Stephen C Jul 02 '23 at 03:31
  • You have `0.5 > 0.3 > 0.25 > 0.2 > 0.125 > 0.1`. Thus the result of the subtraction requires bits in the last two places of the mantissa of 0.1 that are not present in the representations of 0.3 and 0.2. Those bits have to be invented (defaulted), and that default can be wrong; here it obviously is. – Lutz Lehmann Jul 02 '23 at 07:10 (see the exponent sketch below)
  • The reason `println(0.1)` prints out "0.1" is explained by the answers to [this question](https://stackoverflow.com/questions/76588445/formatting-doubles-with-highest-possible-precision). – k314159 Jul 05 '23 at 16:30
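The sketches below illustrate these comments; each is a minimal Java sketch, assuming standard IEEE-754 `double`/`float` semantics. First, Andreas Wenzel's point: the `BigDecimal(double)` constructor preserves the exact binary value, with no decimal rounding, so it shows what `0.1` really stores:

```java
import java.math.BigDecimal;

public class ExactValue {
    public static void main(String[] args) {
        // BigDecimal(double) converts the exact stored binary value.
        System.out.println(new BigDecimal(0.1));
        // 0.1000000000000000055511151231257827021181583404541015625
        System.out.println(new BigDecimal((double) 0.1f));
        // 0.100000001490116119384765625
    }
}
```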
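Tim Roberts' rounding-error point, checked at the bit level: comparing the raw bit patterns shows exactly how far `0.3 - 0.2` lands from `0.1` (two units in the last place here):

```java
public class BitDifference {
    public static void main(String[] args) {
        double computed = 0.3 - 0.2;
        double literal  = 0.1;
        // Raw IEEE-754 bit patterns of the two doubles.
        System.out.println(Long.toHexString(Double.doubleToLongBits(computed))); // 3fb9999999999998
        System.out.println(Long.toHexString(Double.doubleToLongBits(literal)));  // 3fb999999999999a
        // Distance in units in the last place (ulps).
        System.out.println(Double.doubleToLongBits(literal)
                         - Double.doubleToLongBits(computed));                   // 2
    }
}
```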
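Stephen C's literal-rounding experiment, sketched out: the extra digits of the literal are discarded when the compiler rounds it to the nearest `double`, so it compares equal to plain `0.1`:

```java
public class LiteralRounding {
    public static void main(String[] args) {
        // The compiler rounds this literal to the nearest double at compile time.
        double d = 0.10000000000000000000001;
        System.out.println(d == 0.1);  // true: the extra digits were lost
        System.out.println(d);         // 0.1
    }
}
```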
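Finally, Lutz Lehmann's exponent argument can be verified directly with `Math.getExponent` and `Math.ulp`:

```java
public class Exponents {
    public static void main(String[] args) {
        // Binary exponents: 0.3 lies in [0.25, 0.5), 0.2 in [0.125, 0.25),
        // and 0.1 in [0.0625, 0.125).
        System.out.println(Math.getExponent(0.3)); // -2
        System.out.println(Math.getExponent(0.2)); // -3
        System.out.println(Math.getExponent(0.1)); // -4
        // A smaller exponent means a smaller ulp, so 0.1 carries low-order
        // bits that the representations of 0.3 and 0.2 cannot hold.
        System.out.println(Math.ulp(0.3)); // 5.551115123125783E-17
        System.out.println(Math.ulp(0.1)); // 1.3877787807814457E-17
    }
}
```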

0 Answers