
Can someone please explain this to me?

I was writing a def (function) to convert text into numbers and wanted to ensure that if the inputs were all ints, the representation was int, and if floats, float. If a mix, it defaulted to float. Testing it on summations produced some interesting things. As I tested more, it got stranger still.

If it was always one way or the other, maybe I could address it, but it's inconsistent as far as I can see. I've heard of this being a concern and of libraries that address it (decimal types), but why does this happen? This kind of thing concerns me. Should it?

Examples below range from "yep, that makes sense" to "huh?" to "how in the ???". And these happen within numbers in close proximity. I mean when it's 5.8 vs. 6.8 and you get that delta in the result. WT???

TIA for any insights. I'm sure this is old news somewhere :)

All examples were run from the prompt, although it's the same from code. Using Python 3.8.2. Some examples:

-2 + 4.5 => 2.5 "yep, that makes sense"

-6.8 + 8 => 1.2000000000000002 "huh?"

-2+3.8 => 1.7999999999999998 "how in the ???"

-5.8+8 => 2.2

-7.8+8 => 0.20000000000000018

-8.8+8 => -0.8000000000000007

-4.8+8 => 3.2

-4-3.8+8 => 0.20000000000000018

-4+3.8 => -0.20000000000000018

-3+3.8 => 0.7999999999999998

-1+3.8 => 2.8
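(Not part of the original post, but a quick way to inspect what is going on: the standard-library `fractions` module shows the exact binary fraction each literal is actually stored as.)

```python
from fractions import Fraction

# Fraction(float) shows the exact rational value each float holds.
# 4.5 is exactly representable in binary; 6.8, 3.8, and 5.8 are not,
# so the "error" exists before any addition happens.
for x in (4.5, 6.8, 3.8, 5.8):
    print(x, Fraction(x))
```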

LPLP1313
  • Does this answer your question? [Is floating point math broken?](https://stackoverflow.com/questions/588004/is-floating-point-math-broken) – chtz May 08 '20 at 22:48

1 Answer


When −6.8 is converted to the IEEE-754 64-bit binary floating-point format, it is not representable, so the nearest representable value is produced instead. This value is −6.CCCCCCCCCCCCC₁₆. When 8 is added, the result is 1.3333333333334₁₆. In decimal, that is 1.20000000000000017763568394002504646778106689453125. Some software displays that as 1.2000000000000002.
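You can see both of these facts directly in Python: `decimal.Decimal(float)` shows the exact decimal expansion of the stored binary value, and `float.hex()` shows its hexadecimal significand.

```python
from decimal import Decimal

# The exact value stored for -6.8 + 8, written out in full decimal:
print(Decimal(-6.8 + 8))
# -> 1.20000000000000017763568394002504646778106689453125

# The same value's hexadecimal significand (1.3333333333334 base 16):
print((-6.8 + 8).hex())
# -> 0x1.3333333333334p+0
```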

Note that when 1.2 is converted to this format, the result is 1.3333333333333₁₆, not 1.3333333333334₁₆. This is different from −6.8+8 because, with −6.8, the rounding had to occur at the 2⁻⁵⁰ bit position: representing −6.8 requires the bits to start at the 2² position, and there are 53 bits in the significand (the part of the floating-point representation that represents the “fraction” part of the number). With 1.2, the first bit is in the 2⁰ position, and the rounding occurs at the 2⁻⁵² position. Thus, converting 1.2 to the floating-point format produces a result closer to 1.2 than −6.8+8 does.
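The two values differ only in the last hexadecimal digit of the significand, which `float.hex()` makes visible:

```python
# 1.2 was rounded once, at the 2**-52 bit:
print((1.2).hex())        # -> 0x1.3333333333333p+0
# -6.8 was rounded at the 2**-50 bit, then 8 was added:
print((-6.8 + 8).hex())   # -> 0x1.3333333333334p+0
# They are two different floats:
print(1.2 == -6.8 + 8)    # -> False
```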

When displaying floating-point numbers, some software, for its default formatting, produces just as many decimal digits as are needed to uniquely distinguish the floating-point number from its neighboring representable values. When 1.2 is converted to 1.3333333333333₁₆ and then formatted as decimal, “1.2” is produced because that uniquely distinguishes 1.3333333333333₁₆. But when 1.3333333333334₁₆ is formatted, it is necessary to produce “1.2000000000000002” to distinguish it from 1.3333333333333₁₆.
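Python's `repr()` is an example of this shortest-round-trip formatting: it picks the fewest decimal digits that map back to the exact same float.

```python
print(repr(1.2))        # -> '1.2'
print(repr(-6.8 + 8))   # -> '1.2000000000000002'

# Both strings round-trip to their respective floats:
assert float('1.2') == 1.2
assert float('1.2000000000000002') == -6.8 + 8
```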

Your other examples are similar. In cases like −5.8+8, the rounding happens to work out to get the same result as you would get from converting 2.2 directly, so then the output is “2.2”. In other cases, the rounding works out a little differently, and you get a different output.
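In other words, whether you “see” the error depends on whether the rounded sum happens to land on the same float as the decimal literal you expect:

```python
print(-5.8 + 8 == 2.2)   # -> True   (so it prints as "2.2")
print(-7.8 + 8 == 0.2)   # -> False  (prints as "0.20000000000000018")
```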

Eric Postpischil