We know that floating-point numbers suffer precision loss because of how they are laid out in memory: a value is stored as a sign bit, a binary exponent, and a fixed-width significand.
- Mathematically, binary cannot represent every decimal fraction exactly; 0.1, for instance, is an infinitely repeating fraction in base 2, so what gets stored is only the nearest double (see the sketch after this list).
- Binary scientific notation with a fixed-width significand must round, so values that differ by less than one unit in the last place collapse to the same double.
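A minimal sketch of the first point, assuming Python (the section names no language): constructing `decimal.Decimal` from a float converts the double exactly, which exposes the true binary value stored for the literal `0.1`.

```python
from decimal import Decimal

# Decimal(float) converts the IEEE 754 double with no rounding, so this
# prints the exact value the literal 0.1 actually stores in memory.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```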
Why does printing the floating-point literal 0.1 look exact, while printing 0.3 - 0.2 looks inaccurate? Because the printer chooses the shortest decimal string that rounds back to the stored double: the double nearest 0.1 prints back as "0.1", but 0.3 - 0.2 evaluates to a slightly different double (the doubles nearest 0.3 and 0.2 each carry their own rounding error), and its shortest round-trip string is 0.09999999999999998.
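A short demonstration, again assuming Python, whose `print`/`repr` uses shortest round-trip formatting; forcing extra digits with a format specifier reveals the hidden digits:

```python
# The shortest string that round-trips to 0.1's double is "0.1" itself,
# so the literal looks exact when printed.
print(0.1)          # 0.1
print(0.3 - 0.2)    # 0.09999999999999998

# Forcing 20 digits shows the two doubles really are different values.
print(f"{0.1:.20f}")        # 0.10000000000000000555
print(f"{0.3 - 0.2:.20f}")  # 0.09999999999999997780
```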
I have run many tests over many such cases, and the pattern is consistent; a few of them are sketched below.
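A small sweep, assuming Python; the pairs here are hypothetical examples chosen for illustration, not the author's original test set:

```python
# Subtract a few decimal pairs and print both the round-trip repr and a
# fixed 20-digit rendering, so accumulated rounding error is visible.
cases = [(0.3, 0.2), (0.3, 0.1), (0.5, 0.4), (1.1, 1.0), (0.7, 0.6)]
for x, y in cases:
    diff = x - y
    print(f"{x} - {y} = {diff!r}  ({diff:.20f})")
```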