-3

I spent an hour today trying to figure out why

return abs(val-desired) <= 0.1

was occasionally returning False, despite val and desired having an absolute difference of <= 0.1. After some debugging, I found out that -13.2 + 13.3 = 0.10000000000000142. Now, I understand that CPUs cannot easily represent most real numbers, but this looks like an exception: you can subtract 0.00000000000000142 from the result and get 0.1, so 0.1 evidently can be represented in Python.
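
Here's a minimal repro of what I'm seeing (Python 2.7 REPL; the exact digits assume IEEE 754 doubles):

>>> val, desired = 13.3, 13.2
>>> abs(val - desired)
0.10000000000000142
>>> abs(val - desired) <= 0.1
False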

I am running Python 2.7 on Intel Core architecture CPUs (this is all I have been able to test it on). I'm curious how I can store a value of 0.1 when arithmetic on particular floating point values apparently cannot produce it exactly. val and desired are float values.

chazkii
  • 1,300
  • 12
  • 21
  • 1
    You may find this useful: http://floating-point-gui.de/ – BlackBear Aug 08 '13 at 11:20
  • 1
    *I understand that CPUs cannot easily represent most floating point numbers with high resolution* ... you misunderstand entirely: floating-point numbers are all [forget the integers for now] that a CPU can represent, and they represent them to the full precision that they are capable of. Now, if you replaced *floating-point* with *real*, your understanding would be more correct. – High Performance Mark Aug 08 '13 at 11:20
  • 3
    This question is asked several times every day, be it for Python, C, Java or whatever. -1 for not searching for it before wasting other members' time. 0.1 cannot be represented exactly in floating-point (single or double precision), so there must be a misunderstanding or a mechanism outside the FPU which produces your exact 0.1 value. – Olof Forshell Aug 08 '13 at 11:31
  • @moooeeeep: It's not a duplicate of that question. Read the question again. The OP points out that this *is* understood. It's the accuracy of the result that is not understood. – Lennart Regebro Aug 08 '13 at 11:35
  • @BlackBear bookmarking this one, less heavy handed than the often-cited [Goldberg document](http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html) – fvu Aug 08 '13 at 11:42

3 Answers

4

Yes, this can be a bit surprising:

>>> +13.3
13.300000000000001
>>> -13.2
-13.199999999999999
>>> 0.1
0.10000000000000001

All these numbers can be represented with around 16 significant digits of accuracy. So why:

>>> 13.3-13.2
0.10000000000000142

Why only 14 digits of accuracy in that case?

Well, that's because 13.3 and -13.2 have 16 significant digits of accuracy, which means 14 decimal places, since there are two digits before the decimal point. So the result also has only 14 decimal places of accuracy, even though the computer can represent numbers with 16 digits.

If we make the numbers bigger, the accuracy of the result decreases further:

>>> 13000.3-13000.2
0.099999999998544808

>>> 1.33E10-13.2E10
-118700000000.0

In short, the accuracy of the result depends on the accuracy of the input.
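
To make this concrete, you can look at the spacing between adjacent doubles at these magnitudes (a quick sketch using sys.float_info, available since Python 2.6):

import sys

eps = sys.float_info.epsilon  # relative spacing of doubles near 1.0, about 2.2e-16

# Near 13.3 the absolute spacing is about 13.3 * eps, roughly 3e-15, so the
# computed 13.3 - 13.2 can miss the real 0.1 by an error of that order.
print(13.3 * eps)            # ~3e-15
print((13.3 - 13.2) - 0.1)   # ~1.4e-15 with IEEE 754 doubles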

Lennart Regebro
  • 167,292
  • 41
  • 224
  • 251
  • I agree with your explanation, but I think the middle section (that contains the crux of your answer) is not really easy to understand. The thing is, an IEEE754 double can represent a total of 16 digits, which is the sum of the digits **left** and **right** of the decimal point. Identical to your `14 decimal points`, but maybe a bit clearer in explaining what's going on here? – fvu Aug 08 '13 at 11:40
  • 1
    Indeed, the important part to understand is not only that floating points have limited precision (a **relative** error of ~2e-16), but that this error gets amplified by subtracting two large numbers. Both numbers have an **absolute** error of around 13 * 2e-16, so the difference will have a potential **absolute** error of 2 * 13 * 2e-16. Since the nominal solution of the subtraction is 0.1, this gives a **relative** error in the answer of 2 * 13 * 2e-16 / 0.1 = 5e-14. The small **relative** error in representing 13.3 thus gets amplified by a factor of more than 100! – Bas Swinckels Aug 08 '13 at 11:45
  • I updated it, I hope it got better. – Lennart Regebro Aug 08 '13 at 11:47
2

To directly address your question of "how do I store a value like 0.1 and do an exact comparison to it when I have imprecise floating-point numbers," the answer is to use a different type to represent your numbers. Python has a decimal module for doing decimal fixed-point and floating-point math instead of binary. In decimal, 0.1, -13.2, and 13.3 can all be represented exactly rather than approximately; alternatively, you can set a specific level of precision when doing calculations and discard digits below that level of significance. For example (some_calculation and some_other_calculation below are placeholders for however your values are produced):

import decimal

# Placeholders; converting via str() preserves the decimal digits you see.
val = decimal.Decimal(str(some_calculation))
desired = decimal.Decimal(str(some_other_calculation))
return abs(val - desired) <= decimal.Decimal('0.1')
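
For example, constructing the operands from decimal strings keeps the arithmetic exact (a quick REPL check):

>>> from decimal import Decimal
>>> Decimal('13.3') - Decimal('13.2')
Decimal('0.1')
>>> abs(Decimal('13.3') - Decimal('13.2')) <= Decimal('0.1')
True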

The other common alternative is to use integers instead of floats by artificially multiplying by some power of ten.

# round to integer tenths; truncating the scaled difference would wrongly reject exact-0.1 gaps
return abs(int(round(val * 10)) - int(round(desired * 10))) <= 1
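
For instance, with the values from the question (assuming val and desired are ordinary floats):

>>> val, desired = -13.2 + 13.3, 0.1
>>> abs(int(round(val * 10)) - int(round(desired * 10))) <= 1
True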
llb
  • 1,671
  • 10
  • 14
1

"Now I understand that CPUs cannot easily represent most floating point numbers with high resolution", the fact you asked this question indicates that you don't understand. None of the real values 13.2, 13.3 nor 0.1 can be represented exactly as floating point numbers:

>>> "{:.20f}".format(13.2)
'13.19999999999999928946'
>>> "{:.20f}".format(13.3)
'13.30000000000000071054'
>>> "{:.20f}".format(0.1)
'0.10000000000000000555'
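
If you want to see the full stored value rather than just 20 digits, decimal.Decimal will accept a float directly on Python 2.7 and convert it exactly:

>>> from decimal import Decimal
>>> Decimal(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')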
Duncan
  • 92,073
  • 11
  • 122
  • 156
  • I think the OP understands this. The question, as I understand it, is why the accuracy of the result is less than what the computer can represent. – Lennart Regebro Aug 08 '13 at 11:31