
I know why most decimal fractions cannot be represented exactly in binary floating-point. I also know that a decimal value is rounded to the nearest representable value when it is converted to floating-point according to IEEE 754.
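For instance, the inexactness shows up immediately in arithmetic (a quick illustration in a Python 3 REPL; any language using IEEE 754 doubles behaves the same way):

    >>> 0.1 + 0.2
    0.30000000000000004
    >>> 0.1 + 0.2 == 0.3
    False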

But that means the stored floating-point value is not exact. For example, decimal 0.1 is rounded to this double-precision bit pattern (sign | exponent | mantissa):

0 | 01111111011 | 1001100110011001100110011001100110011001100110011010

whereas the exact binary expansion of 0.1 is infinite: 0.000110011001... (the block 1001 repeats forever).
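To make my question concrete, here is the exact value that the rounded bit pattern above actually stores (Python's decimal.Decimal, applied to a float, prints that float's exact value):

    >>> from decimal import Decimal
    >>> Decimal(0.1)   # the exact value of the double nearest to decimal 0.1
    Decimal('0.1000000000000000055511151231257827021181583404541015625')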

When we print 0.1, the floating-point representation has to be converted back to decimal. My question is: how can this already-rounded floating-point value be converted back to decimal "accurately", so that we see exactly 0.1 again?
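What I can observe is that the result depends on how many decimal digits are requested when printing (Python again; I assume printf-style formatting in other languages behaves similarly):

    >>> '%.1f' % 0.1    # few digits: rounds to 0.1
    '0.1'
    >>> '%.17g' % 0.1   # 17 significant digits: the stored error becomes visible
    '0.10000000000000001'
    >>> '%.25f' % 0.1   # more digits converge on the exact stored value
    '0.1000000000000000055511151'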

I found a similar question here, where @Guffa says:

"it's close enough so that when the least significant digits are rounded off to display the value"

But I still don't understand how the rounding works when the floating-point value is converted back to decimal.
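To make my mental model explicit, here is a brute-force sketch of the guarantee I think printing relies on: the stored double is the one closest to decimal 0.1, so the shortest decimal string that converts back to exactly the same double is "0.1" itself. This is only my own sketch (shortest_roundtrip is a name I made up), not the real algorithm; I understand real runtimes use specialized algorithms such as Grisu or Ryū:

    def shortest_roundtrip(x: float) -> str:
        """Return the decimal string with the fewest significant digits
        that converts back to exactly the same double."""
        for digits in range(1, 18):        # 17 significant digits always suffice
            s = '%.*g' % (digits, x)
            if float(s) == x:              # round-trips to the identical double?
                return s
        return '%.17g' % x

    print(shortest_roundtrip(0.1))         # -> 0.1
    print(shortest_roundtrip(0.1 + 0.2))   # -> 0.30000000000000004

The neighboring doubles also show why this is unambiguous: math.nextafter(0.1, 0) prints as 0.09999999999999999 and math.nextafter(0.1, 1) prints as 0.10000000000000002 (math.nextafter requires Python 3.9+), so the string "0.1" rounds back to exactly one double.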

I am not sure I have made this clear, but I really want to know why it works.

Thanks.
