
The Python documentation on floats states that entering

0.1

actually stores the nearest representable double,

0.1000000000000000055511151231257827021181583404541015625

and that, since that is more digits than most people find useful, Python keeps the number of digits manageable by displaying a rounded value instead:

0.1

What are the rules surrounding which floats get rounded for display and which ones don't? I've encountered some funny scenarios where

1.1+2.2 returns 3.3000000000000003 (unrounded)

but

1.0+2.3 returns 3.3 (rounded)

I know that the decimal module exists for making these things consistent, but am curious as to what determines the displayed rounding in floats.
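For reference, the behaviour described above can be reproduced in a CPython 3 session:

```python
# Reproducing the examples from the question (output as shown by CPython 3):
print(0.1)        # 0.1
print(1.1 + 2.2)  # 3.3000000000000003
print(1.0 + 2.3)  # 3.3
```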

tom
    The rules change depending on which version of Python you're using. See for example http://stackoverflow.com/questions/25898733/why-does-strfloat-return-more-digits-in-python-3-than-python-2 – Mark Ransom Oct 27 '14 at 16:57
  • @MarkRansom Considering that there is a particularly good answer there, I don't know whether to consider this question a duplicate or to attempt an answer that tries to be more down-to-earth. – Pascal Cuoq Oct 27 '14 at 17:06
    @PascalCuoq I get the nagging feeling that the answer there isn't quite complete for the question asked here. And it certainly isn't in an easy-to-digest format. – Mark Ransom Oct 27 '14 at 17:12

1 Answer


What are the rules surrounding which floats get rounded for display and which ones don't? I've encountered some funny scenarios where

1.1+2.2 returns 3.3000000000000003 (unrounded)

but

1.0+2.3 returns 3.3 (rounded)

Part of the explanation is of course that 1.1 + 2.2 and 1.0 + 2.3 produce different floating-point numbers, and part of the explanation for that is that 1.1 is not really 11/10, 2.2 not really 22/10, and of course floating-point + is not rational addition either.

Many modern programming languages, including the most recent Python versions, when displaying a double-precision floating-point value d, show exactly the number of decimal digits necessary for the decimal representation, when re-parsed as a double, to convert back exactly to d. As a consequence:
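This “shortest round-tripping representation” rule (used by repr and str since Python 3.1) can be checked directly. The loop below is only a sketch: it verifies the weaker property that no shorter prefix of the printed string parses back to the same double.

```python
d = 1.1 + 2.2
s = repr(d)           # '3.3000000000000003'
assert float(s) == d  # the printed string parses back to exactly d

# No shorter prefix of the printed string round-trips to the same double:
for n in range(1, len(s)):
    assert float(s[:n]) != d
```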

  1. there is exactly one floating-point value that prints as 3.3. There cannot be two, because they would have to be the same by application of the definition, and there is at least one because if you convert the decimal representation 3.3 to a double, you get a double that has the property of producing the string “3.3” when converted to decimal with the algorithm in question.

  2. the values are rounded for the purpose of displaying them with decimal digits, but they otherwise remain exactly the numbers that they are. So some of the “rules” you are asking about are really rules about how floating-point operations round their results. These rules are simple, but only when you look at the binary representations of the arguments and results; seen through decimal representations, the rounding looks random (it isn't).

  3. the numbers only have a compact representation in binary. The exact value may take many decimal digits to represent exactly. “3.3000000000000003” is not “unrounded”, it is simply rounded to more digits than “3.3”, specifically, just exactly the number of digits necessary to distinguish that double-precision number from its neighbor (the one that is represented by “3.3”). They are in fact respectively the numbers below:

3.29999999999999982236431605997495353221893310546875
3.300000000000000266453525910037569701671600341796875

Of the two, 33/10 is closest to the former, so the former can be printed as “3.3”. The latter cannot be printed as “3.3”, nor as “3.30”, “3.300”, …, “3.300000000000000”, since all of these representations are equivalent and parse back to the floating-point number 3.29999999999999982236431605997495353221893310546875. So it has to be printed as “3.3000000000000003”, where the final 3 appears because the digit 2 is rounded up, being followed by a 6.
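The exact values above can be recovered with the decimal module: Decimal(f) converts the binary value of f exactly, with no rounding.

```python
from decimal import Decimal

# Exact decimal expansions of the two neighbouring doubles:
print(Decimal(1.0 + 2.3))
# 3.29999999999999982236431605997495353221893310546875
print(Decimal(1.1 + 2.2))
# 3.300000000000000266453525910037569701671600341796875
```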

Pascal Cuoq