33

I imagine this is a classic floating-point precision question, but I am trying to wrap my head around this result: running `1//0.01` in Python 3.7.5 yields `99`.

I imagine it is an expected result, but is there any way to decide when it is safer to use `int(1/f)` rather than `1//f`?

Mike Doe
Albert James Teddy
  • Yes, it is always safer int(1/f). Simply because // is the FLOOR division, and you wrongly think of it as ROUND. – Perdi Estaquel Feb 21 '20 at 02:38
  • Possible duplicate of [Is floating point math broken?](https://stackoverflow.com/questions/588004/is-floating-point-math-broken) – pppery Feb 21 '20 at 04:51
  • Not a duplicate. This can work as expected 99.99% of the time by always using `round()` and never `//` or `int()`. The linked question is about float comparison, has nothing to do with truncation, and has no such easy fix. – maxy Mar 01 '20 at 13:39

5 Answers

25

If this were division with real numbers, 1//0.01 would be exactly 100. Since they are floating-point approximations, though, 0.01 is slightly larger than 1/100, meaning the quotient is slightly smaller than 100. It's this 99.something value that is then floored to 99.
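A small sketch in the interpreter makes both effects visible, using the standard `decimal` module to reveal the stored value:

```python
from decimal import Decimal

# The double chosen for the literal 0.01 is slightly above 1/100:
print(Decimal(0.01))  # 0.01000000000000000020816681711721685...

# So the true quotient of 1/0.01 is slightly below 100;
# // floors that value, while / rounds it to the nearest double:
print(1 // 0.01)  # 99.0
print(1 / 0.01)   # 100.0
```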

chepner
  • This doesn't address the "is there a way to decide when it is safer" part. – Scott Hunter Feb 20 '20 at 15:36
  • "Safer" isn't well-defined. – chepner Feb 20 '20 at 15:37
  • Enough to completely ignore it, esp. when the OP is aware of floating point issues? – Scott Hunter Feb 20 '20 at 15:38
  • @chepner If "safer" isn't well defined, then perhaps it's better to ask for clarification :/ –  Feb 21 '20 at 07:07
  • it's pretty clear to me that "safer" means "error not worse than a cheap pocket calculator" – maxy Mar 01 '20 at 13:55
  • @maxy - If you want to emulate the behavior of a cheap pocket calculator, you need to use decimal arithmetic (not binary floating point); see https://docs.python.org/3.7/library/decimal.html. Please also see https://stackoverflow.com/questions/588004/is-floating-point-math-broken – Stephen C Mar 01 '20 at 16:44
  • @ScottHunter - While the OP is aware of the problem, it seems that he doesn't understand it. Besides, there is no "safer" way, if you are going to use binary floating point types. The errors are inherent, and you just need to deal with them. (And the same is also true for decimal types; e.g. consider what happens when you compute `(1.0 / 3.0) * 3.0` using a decimal type.) – Stephen C Mar 01 '20 at 16:51
  • @StephenC A pocket calculator will also show that `(sqrt(2) + 0.1 - 0.1)^2` equals `2`. This is not just floating-point luck (as when using `//` in Python), but the result of doing rounding instead of truncation before display. – maxy Mar 02 '20 at 07:50
10

The reasons for this outcome are as you state, and are explained in Is floating point math broken? and many other similar Q&As.

When you know the number of decimals of the numerator and denominator, a more reliable way is to multiply those numbers first so they can be treated as integers, and then perform integer division on them:

So in your case, `1//0.01` should first be converted to `1*100//(0.01*100)`, which is 100.
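The scaled expression can be checked directly; note that it works here because the scaled divisor happens to round to exactly 1.0:

```python
# Scale numerator and denominator so the floor division
# operates on (near-)integer values:
print(0.01 * 100)                # 1.0 (rounds to exactly 1.0 in this case)
print(1 * 100 // (0.01 * 100))   # 100.0
print(1 // 0.01)                 # 99.0, without scaling, for comparison
```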

In more extreme cases you can still get "unexpected" results. It might be necessary to add a round call to numerator and denominator before performing the integer division:

1 * 100000000000 // round(0.00000000001 * 100000000000)

But, if this is about working with fixed decimals (money, cents), then consider working with cents as unit, so that all arithmetic can be done as integer arithmetic, and only convert to/from the main monetary unit (dollar) when doing I/O.
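As a minimal sketch of the integer-cents approach (the helper names here are just for illustration, not from any particular library):

```python
def dollars_to_cents(text: str) -> int:
    """Parse a dollar string like '1.00' into integer cents (I/O boundary)."""
    dollars, _, cents = text.partition(".")
    return int(dollars) * 100 + int(cents.ljust(2, "0"))

def cents_to_dollars(cents: int) -> str:
    """Format integer cents back into a dollar string (I/O boundary)."""
    return f"{cents // 100}.{cents % 100:02d}"

total = dollars_to_cents("1.00")   # 100 cents
unit = dollars_to_cents("0.01")    # 1 cent
print(total // unit)               # 100, exact integer arithmetic throughout
print(cents_to_dollars(total))     # 1.00
```

All intermediate arithmetic stays in exact integers, so no floating-point error can creep in between the I/O boundaries.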

Or alternatively, use a library for decimals, like decimal, which:

...provides support for fast correctly-rounded decimal floating point arithmetic.

from decimal import Decimal
cent = Decimal(1) / Decimal(100)  # Contrary to floating point, this is exactly 0.01
print(Decimal(1) // cent)  # 100
trincot
  • "which is 100." Not necessarily: if the .01 isn't exact, then .01 * 100 isn't as well. It must be "tuned" manually. – glglgl Feb 20 '20 at 15:38
9

What you have to take into account is that `//` is the floor operator, and as such you should a priori expect to land on 99 as readily as on 100 (*), because the result of the division is 100 ± epsilon with epsilon > 0 (the chances of getting exactly 100.00...0 are extremely low).

You can actually see the same with a minus sign,

>>> 1//.01
99.0
>>> -1//.01
-100.0

and you should be as (un)surprised.

On the other hand, int(-1/.01) performs the division first and then applies int() to the result, which is not floor but truncation towards 0! Meaning that in that case,

>>> 1/.01
100.0
>>> -1/.01
-100.0

hence,

>>> int(1/.01)
100
>>> int(-1/.01)
-100

Rounding, though, would give you your expected result for this operator because, again, the error is small for those figures.

(*) I am not saying that the probability is the same; I am just saying that, a priori, when you perform such a computation with floating-point arithmetic, that is an estimate of what you are getting.
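The difference between flooring and truncation can be seen on its own, independent of any floating-point error:

```python
import math

# floor always moves towards -infinity, int() truncates towards zero;
# they agree for positive values and differ for negative ones:
print(math.floor(99.9), int(99.9))    # 99 99
print(math.floor(-99.9), int(-99.9))  # -100 -99
```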

myradio
7

Floating point numbers can't represent most decimal numbers exactly, so when you type a floating point literal you actually get an approximation of that literal. The approximation may be larger or smaller than the number you typed.

You can see the exact value of a floating point number by casting it to Decimal or Fraction.

>>> from decimal import Decimal
>>> Decimal(0.01)
Decimal('0.01000000000000000020816681711721685132943093776702880859375')
>>> from fractions import Fraction
>>> Fraction(0.01)
Fraction(5764607523034235, 576460752303423488) 

We can use the Fraction type to find the error caused by our inexact literal.

>>> float((Fraction(1)/Fraction(0.01)) - 100)
-2.0816681711721685e-15

We can also find out how granular double precision floating point numbers around 100 are by using nextafter from numpy.

>>> from numpy import nextafter
>>> nextafter(100,0)-100
-1.4210854715202004e-14

From this we can surmise that the nearest floating point number to 1/0.01000000000000000020816681711721685132943093776702880859375 is in-fact exactly 100.

The difference between 1//0.01 and int(1/0.01) is the rounding. 1//0.01 rounds the exact result down to the next whole number in a single step. So we get a result of 99.

int(1/0.01) on the other hand rounds in two stages, first it rounds the result to the nearest double precision floating point number (which is exactly 100), then it rounds that floating point number down to the next integer (which is again exactly 100).
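Both stages can be checked with exact rational arithmetic, as a sketch using the standard `fractions` module:

```python
from fractions import Fraction

# The exact quotient, with no floating-point rounding at all:
true_q = Fraction(1) / Fraction(0.01)
print(true_q < 100)   # True: the exact value is just below 100
print(float(true_q))  # 100.0: the nearest double is exactly 100
print(1 // 0.01)      # 99.0: floored before that rounding can happen
```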

plugwash
  • Calling this just *rounding* is misleading. It should be called either *truncation* or *rounding towards zero*: `int(0.9) == 0` and `int(-0.9) == 0` – maxy Mar 01 '20 at 13:15
  • It is **binary** floating point types you are talking about here. (There are also decimal floating point types too.) – Stephen C Mar 01 '20 at 16:52
3

If you execute the following

from decimal import *

num = Decimal(1) / Decimal(0.01)
print(num)

The output will be:

99.99999999999999791833182883

This is how the value is actually represented internally, so flooring it with // gives 99.

Rain
  • It's accurate enough to show the error in this case, but be aware that "Decimal" arithmetic is not exact either. – plugwash Feb 21 '20 at 00:48
  • With `Decimal(0.01)` you are too late, the error already crept in before you call `Decimal`. I am not sure how this is an answer to the question... You must first calculate a precise 0.01 with `Decimal(1) / Decimal(100)`, like I showed in my answer. – trincot Feb 21 '20 at 08:15