Possible Duplicate: Inaccurate Logarithm in Python

Why are the math.log10(x) and math.log(x, 10) results different?
In [1]: from math import *
In [2]: log10(1000)
Out[2]: 3.0
In [3]: log(1000,10)
Out[3]: 2.9999999999999996
It's a known bug: http://bugs.python.org/issue3724. It seems logX(y) is always more precise than the equivalent log(y, X).
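A quick way to see this in practice (a minimal sketch; exactly which inputs disagree depends on the platform's libm) is to scan integer powers of 10 and compare the two forms:

```python
import math

# Compare the specialized log10 with the generic two-argument log
# over exact powers of ten.
for n in range(1, 16):
    x = 10 ** n
    direct = math.log10(x)     # fixed-base algorithm
    generic = math.log(x, 10)  # evaluated as log(x) / log(10)
    if direct != generic:
        print(n, direct, generic)
```

On CPython this typically prints a few lines such as `3 3.0 2.9999999999999996`, matching the session above.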
math.log10 and math.log(x, 10) use different algorithms, and the former is usually more accurate. In fact, this is a known issue (Issue 6765): math.log, log10 inconsistency.

One can think of it this way: log10(x) has a fixed base, so it can be computed directly by a mathematical approximation formula (e.g. a Taylor series), while log(x, 10) comes from a more general formula with two variables and is calculated indirectly as log(x) / log(10) (so at the very least, the precision of log(10) affects the precision of the quotient). It is therefore natural that the former is both faster and more accurate: it takes advantage of knowing the logarithmic base (i.e. 10) in advance.
As others have pointed out, log(1000, 10) is computed internally as log(1000) / log(10). This can be verified empirically:
In [3]: math.log(1000, 10) == math.log(1000) / math.log(10)
Out[3]: True
In [4]: math.log10(1000) == math.log(1000) / math.log(10)
Out[4]: False
Neither log(1000) nor log(10) can be represented exactly as a float, so the final result of the division is also inexact.
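To make the rounding at each step visible, one can inspect the intermediate values (a sketch; the last printed digits may vary slightly across platforms):

```python
import math

# Each intermediate result is rounded to the nearest representable float.
print(repr(math.log(1000)))  # ln(1000), inexact
print(repr(math.log(10)))    # ln(10), inexact

# Dividing two rounded values can land just below the true answer.
print(math.log(1000) / math.log(10))  # 2.9999999999999996

# Workarounds: use the dedicated function, or round when an integer
# result is expected.
print(math.log10(1000))           # 3.0
print(round(math.log(1000, 10)))  # 3
```

So when the base is 10 (or 2, for which math.log2 exists), preferring the dedicated function avoids the extra rounding step entirely.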