
When dividing a float by 100 in Python 2.7 I get the following "rounding behaviour":

>>> 3.7e-03/100
3.7000000000000005e-05

I would expect the following:

>>> 3.7e-03/100
3.7e-05

Note that:

>>> 3.7e-03/100 == 3.7e-05
False
>>> 3.7000000000000005e-05 == 3.7e-05
False

While probably of no practical consequence in most applications, I find this behaviour somewhat disconcerting.

Why does this happen and how can I avoid it?

I am using Python: '2.7.5 |Anaconda 1.7.0 (32-bit)| (default, Jul 1 2013, 12:41:55) [MSC v.1500 32 bit (Intel)]'

ARF
  • This has a good explanation http://stackoverflow.com/questions/5997027/python-rounding-error-with-float-numbers – Niek de Klein Oct 31 '13 at 10:15
  • What happens if you don't divide: i.e. what is 3.7e-03? The trouble is first computers use binary and 10 isn't a power of two. Second they only have limited bytes, so even if something could be written as the sum of exact powers of two you might get rounding errors due to lack of bytes. – doctorlove Oct 31 '13 at 10:16
  • You might want to read [What Every Computer Scientist Should Know About Floating-Point Arithmetic](http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html). It's a generic problem that is not dependent on programming language. – Some programmer dude Oct 31 '13 at 10:17
  • @doctorlove Oddly enough 3.7e-03 stays 3.7e-03 or more precisely returns 0.0037. 3.7e-05 stays 3.7e-05. Only 3.7e-03/100 leads to the above mentioned issue. This is what threw me. – ARF Oct 31 '13 at 14:12

1 Answer


This is a well-known deficiency of floating-point numbers.

You can think of binary floating-point as fractions with power-of-two denominators. Even a simple number such as 0.1 cannot be exactly represented in binary floating-point, and division by a power of ten is, in general, inexact.
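You can see this directly by converting the floats to `Decimal`, which exposes the exact binary value a float actually stores (a small sketch; the `False` result mirrors the comparisons in the question):

```python
from decimal import Decimal

# Decimal(float) exposes the exact binary value a float stores.
# Neither 3.7e-03 nor 3.7e-05 is exactly representable in base 2,
# so both print with a long tail of extra digits.
print(Decimal(3.7e-03))
print(Decimal(3.7e-05))

# The stored value is only the closest double to 0.0037, not 0.0037 itself.
print(Decimal(3.7e-03) == Decimal('0.0037'))  # False
```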

If you need accurate division with arbitrary denominators in Python, use `Decimal` (which simulates pencil-and-paper decimals) or `Fraction` (which simulates pencil-and-paper fractions).
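A short sketch of both approaches, assuming the value is available as the string `'3.7e-03'` (constructing from a string avoids inheriting the float literal's binary rounding error):

```python
from decimal import Decimal
from fractions import Fraction

# Decimal keeps division by a power of ten exact.
d = Decimal('3.7e-03') / 100
# Fraction stores the value as an exact ratio of integers.
f = Fraction('3.7e-03') / 100

print(d)  # 0.000037
print(f)  # 37/1000000
print(d == Decimal('3.7e-05'))   # True
print(f == Fraction('3.7e-05'))  # True
```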

user4815162342
  • Thanks for pointing me to the Decimal type. – ARF Oct 31 '13 at 12:38
  • `Decimal` does not support arbitrary denominators. 1/3 does not have an exact decimal result. – Eric Postpischil Oct 31 '13 at 13:53
  • @EricPostpischil Thanks for the hint but that is no issue for my application. The above mentioned Fraction type appears to help in the case you mentioned. – ARF Oct 31 '13 at 14:08
  • @EricPostpischil `Decimal` supports arbitrary denominators exactly like pencil-and-paper decimal division does. So, if you divide by powers of ten, as the OP does, the result will be exact. I am willing to bet that the OP would not be surprised by `Decimal(1) / Decimal(3)` not giving an exact result. – user4815162342 Oct 31 '13 at 16:39
  • @user4815162342: Your answer states that the result of dividing by 100 shows a deficiency in (binary) floating point. Division by 3 in decimal is no different; it is, in the same sense, a deficiency in decimal arithmetic. – Eric Postpischil Oct 31 '13 at 19:01
  • @EricPostpischil Yes, "simulating pencil-and-paper decimals" pretty much implies that deficiency. – user4815162342 Nov 01 '13 at 18:00