
I have written the following code for generating a range with floats:

def drange(start, stop, step):
    result = []
    value = start
    while value <= stop:
        result.append(value)
        value += step
    return result

When calling this function with this statement:

print drange(0.1,1.0,0.1)

I expected to obtain this:

[0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]

But I obtain the following, instead:

[0.1, 0.2, 0.30000000000000004, 0.4, 0.5, 0.6, 0.7, 0.7999999999999999, 0.8999999999999999, 0.9999999999999999]

Why is this, and how can I fix it?

  • Floats are not precise. Please read this: http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html – timgeb Jul 10 '14 at 07:20
  • Here is a nice explanation on how floating point arithmetic works in Python https://docs.python.org/2/tutorial/floatingpoint.html – Yoann Quenach de Quivillic Jul 10 '14 at 07:24
  • @timgeb No, that's an atrocious resource for most people's needs. There are better options: http://meta.stackoverflow.com/questions/260130/canonical-duplicate-for-floating-point-is-inaccurate#comment40519_260130 –  Jul 10 '14 at 07:27
  • I don't mind that most people, on encountering issues like this, don't know enough to search for topics like floating point representation. However *this exact sum, in Python*, already has an answered question. In future, *search* before asking. – jonrsharpe Jul 10 '14 at 07:29

1 Answer


That's how floating-point numbers work. You can't represent infinitely many real numbers in a finite number of bits, so some rounding is inevitable. You should take a look at What Every Programmer Should Know About Floating-Point Arithmetic:

Why don’t my numbers, like 0.1 + 0.2 add up to a nice round 0.3, and instead I get a weird result like 0.30000000000000004?

Because internally, computers use a format (binary floating-point) that cannot accurately represent a number like 0.1, 0.2 or 0.3 at all.

When the code is compiled or interpreted, your “0.1” is already rounded to the nearest number in that format, which results in a small rounding error even before the calculation happens.
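
You can see that pre-existing rounding directly by asking Python for more digits (a quick interactive check, assuming CPython's standard 64-bit floats):

>>> format(0.1, '.20f')
'0.10000000000000000555'
>>> 0.1 + 0.2
0.30000000000000004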

Use round(number, k) to round a given floating-point value to k digits after the decimal (so in your case, use round(number, 1) for one digit).
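
Here is one way that rounding could be folded into your drange. This is just a sketch, not the only fix; the digits parameter is something I've added for illustration, and rounding after each addition also keeps the error from accumulating across iterations:

def drange(start, stop, step, digits=1):
    # Round after every addition so the accumulated floating-point
    # error never leaks into the results or the <= comparison.
    result = []
    value = start
    while value <= stop:
        result.append(round(value, digits))
        value = round(value + step, digits)
    return result

print drange(0.1, 1.0, 0.1)
# [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]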
