
I have been using Python 2.7 since very soon after it was released. I recently had problems with 2.7 not doing everything I needed, so I updated to Python 3 (finally). However, after a few days, I am noticing some problems with multiplication. Is it something I'm doing, or a problem with Python itself?

>>> 12*0.1
1.2000000000000002

If I run a loop like this:

>>> for i in range ( -20, 20 ):
...     print ( i, i*.1 )
...

The output is:

-20 -2.0
-19 -1.9000000000000001
-18 -1.8
-17 -1.7000000000000002
-16 -1.6
-15 -1.5
-14 -1.4000000000000001
-13 -1.3
-12 -1.2000000000000002
-11 -1.1
-10 -1.0
-9 -0.9
-8 -0.8
-7 -0.7000000000000001
-6 -0.6000000000000001
-5 -0.5
-4 -0.4
-3 -0.30000000000000004
-2 -0.2
-1 -0.1
0 0.0
1 0.1
2 0.2
3 0.30000000000000004
4 0.4
5 0.5
6 0.6000000000000001
7 0.7000000000000001
8 0.8
9 0.9
10 1.0
11 1.1
12 1.2000000000000002
13 1.3
14 1.4000000000000001
15 1.5
16 1.6
17 1.7000000000000002
18 1.8
19 1.9000000000000001

When I do a loop like this, however:

>>> for i in range ( -20, 20 ):
...     print ( i, i/10 )
...

It prints out the correct numbers. I have even run the first loop with a range of +/- 1,000,000, and about 40% of the numbers end up this way. Why is this happening?

Adrien Guerin (edited by Alex Riley)
  • Floats are inherently inaccurate. If you want exact results, consider the `fractions` module. – Kevin Feb 04 '15 at 20:38
  • This isn't specific to 3.x, btw. I get `1.2000000000000002` when I try your code in 2.7. – Kevin Feb 04 '15 at 20:42
  • possible duplicate of [Is floating point math broken?](http://stackoverflow.com/questions/588004/is-floating-point-math-broken) – MattDMo Feb 04 '15 at 20:56
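Kevin's suggestion in the comments above can be sketched like this (an illustration added here, not part of the original thread): with the `fractions` module the arithmetic stays exact, so the representation error never appears.

```python
from fractions import Fraction

# Rational arithmetic is exact: 1/10 is stored as a true fraction,
# not as the nearest binary float, so i * (1/10) is exactly i/10
for i in range(-3, 4):
    print(i, i * Fraction(1, 10))
```

Each printed value is an exact rational such as `-3/10`; convert with `float()` only at the end, if a float is actually needed.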

2 Answers


The reason for the difference is that integers can be represented accurately in binary, whereas many decimal numbers cannot (given finite memory).

The float 0.1 is an example of this:

>>> "%.32f" % 0.1
'0.10000000000000000555111512312578'

It's not exactly 0.1. So multiplying by the float 0.1 is not quite the same as dividing by 10. It gives a different result, as you observe:

>>> 14 / 10
1.4
>>> 14 * 0.1
1.4000000000000001

Of course, neither result here is exactly 1.4, it's just that the multiplication by the float 0.1 has a slightly greater margin of error than dividing by the integer 10. The difference between the two is just enough that the division gets rounded to one decimal place, but the multiplication does not.
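The difference can be made visible with the `decimal` module, which prints a float's exact binary value rather than the rounded repr (a small sketch to illustrate the point above):

```python
from decimal import Decimal

# Decimal(x) for a float x shows the float's exact stored value
print(Decimal(14 / 10))   # 1.39999999999999991118... (rounds to 1.4)
print(Decimal(14 * 0.1))  # 1.40000000000000013322... (does not)
print(14 / 10 == 14 * 0.1)  # False: two different floats
```

Neither result is exactly 1.4; they simply fall on opposite sides of it, one ulp apart.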

Alex Riley

The problem is floating-point representation in Python. Python rounds the numbers it displays, because floats actually carry many more digits than are shown. When you multiply by .1, Python assumes you expect a floating-point result, which is why the issue doesn't happen when you divide by 10.

However, you can format your numbers if you want:

>>> import math
>>> format(math.pi, '.12g')  # give 12 significant digits
'3.14159265359'

>>> format(math.pi, '.2f')   # give 2 digits after the point
'3.14'
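Applied to the loop from the question, the same formatting idea hides the representation error (a sketch added here, not part of the original answer):

```python
# Rounding the display to one decimal place masks the tiny
# binary-representation error in i * 0.1
for i in range(-3, 4):
    print(i, format(i * 0.1, '.1f'))
```

Note that this only changes how the number is displayed; the underlying float is still inexact.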

Hope that helps.

Yerko Palma
  • You should make it clear that the problem does not lie in *python* but that that is how floating points work, as used by all modern processors. – jepio Feb 04 '15 at 23:04