That's because `int(a / b)` isn't the same as `a // b`.
`int(a / b)` evaluates `a / b` first, and that's floating-point division. Floating-point numbers are of fixed size, and thus cannot be infinitely precise, which makes them prone to roundoff errors; that's why `.1 + .2 == 0.30000000000000004`. So at some point your code divides really big numbers as floats, and the quotient gets rounded before `int()` ever sees it.
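To see that fixed size bite, it's enough to push a big integer through a float; the trailing digits simply don't survive the trip (standard CPython, 64-bit floats):

>>> big = 10**17 + 2
>>> float(big)                    # rounded to the nearest representable float
1e+17
>>> float(big) == float(10**17)   # the trailing + 2 was rounded away
True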
`a // b` is integer (floor) division, which is a different thing. Python's integers can be arbitrarily huge, and their division doesn't cause roundoff errors, so you get the correct result.
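The difference is easy to demonstrate as soon as the numbers stop fitting into a float exactly; for example:

>>> a = 10**17 + 2
>>> a // 2       # pure integer arithmetic: exact
50000000000000001
>>> int(a / 2)   # a / 2 is a float, rounded to 53 bits of precision
50000000000000000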
Speaking of floating-point numbers being of fixed size, take a look at this:
>>> import math
>>> f = math.factorial
>>> f(20) * f(80-20)
20244146256600469630315959326642192021057078172611285900283370710785170642770591744000000000000000000
>>> f(80) / _
3.5353161422121743e+18
The number `3.5353161422121743e+18` is stored exactly as shown here: there is no information about the digits after the last `3` in `53...43`, because there's nowhere to store it. But `int(3.5353161422121743e+18)` must put something there! It doesn't have enough information, so it puts whatever it wants to, subject only to the constraint that `float(int(3.5353161422121743e+18)) == 3.5353161422121743e+18`.
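Continuing the session above (`f` is still `math.factorial`), you can check both halves of that claim without trusting any particular digit:

>>> x = f(80) / (f(20) * f(60))          # the same rounded float as before
>>> float(int(x)) == x                   # the round-trip is exact...
True
>>> int(x) == f(80) // (f(20) * f(60))   # ...but the digits it filled in are wrong
False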