
Is there a difference between the results returned by Python's built-in pow(x, y) (no third argument) and the values returned by math.pow(), in the case of two float arguments?

I am asking this question because the documentation for math.pow() implies that pow(x, y) (i.e. x**y) is essentially the same as math.pow(x, y):

math.pow(x, y)

Return x raised to the power y. Exceptional cases follow Annex ‘F’ of the C99 standard as far as possible. In particular, pow(1.0, x) and pow(x, 0.0) always return 1.0, even when x is a zero or a NaN. If both x and y are finite, x is negative, and y is not an integer then pow(x, y) is undefined, and raises ValueError.

Changed in version 2.6: The outcome of 1**nan and nan**0 was undefined.

Note the last line: the documentation implies that the behavior of math.pow() is that of the exponentiation operator ** (and therefore of pow(x, y)). Is this officially guaranteed?

Background: My goal is to provide an implementation of both the built-in pow() and of math.pow() for numbers with uncertainty that behaves in the same way as with regular Python floats (same numerical results, same exceptions, same results for corner cases, etc.). I have already implemented something that works quite well, but there are some corner cases that need to be handled.

Eric O. Lebigot

  • Probably nothing you don't already know, but here's some folks messing around with it: http://www.velocityreviews.com/forums/t355507-pow-power-function.html – yurisich Apr 23 '12 at 14:50

4 Answers


Quick Check

From the signatures, we can tell that they are different:

pow(x, y[, z])

math.pow(x, y)

Also, trying it in the shell will give you a quick idea:

>>> pow is math.pow
False

Testing the differences

Another way to understand the differences in behaviour between the two functions is to test for them:

import math
import traceback

inf = float("inf")
nan = float("nan")

vals = [inf, nan, 0.0, 1.0, 2.2, -1.0, -0.0, -2.2, -inf, 1, 0, 2]

# Build every ordered pair of test values (each pair tested once).
tests = set()
for vala in vals:
    for valb in vals:
        tests.add((vala, valb))
        tests.add((valb, vala))

for a, b in tests:
    print("math.pow(%f,%f)" % (a, b))
    try:
        print("    %f" % math.pow(a, b))
    except Exception:
        traceback.print_exc()

    print("__builtins__.pow(%f,%f)" % (a, b))
    try:
        print("    %f" % __builtins__.pow(a, b))
    except Exception:
        traceback.print_exc()

We can then notice some subtle differences. For example:

math.pow(0.000000,-2.200000)
    ValueError: math domain error

__builtins__.pow(0.000000,-2.200000)
    ZeroDivisionError: 0.0 cannot be raised to a negative power

There are other differences, and the test list above is not complete (no long integers, no complex numbers, etc.), but it gives a pragmatic picture of how the two functions behave differently. I would also recommend extending the test to check the type that each function returns. You could write something similar that creates a report of the differences between the two functions.
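
As a sketch of that report idea (the helper names here are mine, not from any library), one could collect every pair where the two functions disagree in value, type, or exception raised:

```python
import math

def outcome(func, a, b):
    """Return func(a, b), or the name of the exception class it raises."""
    try:
        return func(a, b)
    except Exception as exc:
        return type(exc).__name__

inf, nan = float("inf"), float("nan")
vals = [inf, nan, 0.0, 1.0, 2.2, -1.0, -0.0, -2.2, -inf, 1, 0, 2]

differences = []
for a in vals:
    for b in vals:
        r1, r2 = outcome(pow, a, b), outcome(math.pow, a, b)
        # Compare reprs rather than values: nan != nan, and repr also
        # distinguishes int 4 from float 4.0.
        if repr(r1) != repr(r2):
            differences.append((a, b, r1, r2))

for a, b, r1, r2 in differences:
    print("pow(%r, %r) -> %r   math.pow -> %r" % (a, b, r1, r2))
```

Running this turns up, among other things, the ZeroDivisionError/ValueError mismatch shown above.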

math.pow()

math.pow() handles its arguments very differently from the built-in ** or pow(). This comes at the cost of flexibility. Having a look at the source, we can see that the arguments to math.pow() are cast directly to doubles:

static PyObject *
math_pow(PyObject *self, PyObject *args)
{
    PyObject *ox, *oy;
    double r, x, y;
    int odd_y;

    if (! PyArg_UnpackTuple(args, "pow", 2, 2, &ox, &oy))
        return NULL;
    x = PyFloat_AsDouble(ox);
    y = PyFloat_AsDouble(oy);
/*...*/

The checks are then carried out against the doubles for validity, and then the result is passed to the underlying C math library.
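
As a small illustration of that cast (nothing here beyond the standard library), math.pow therefore always returns a float, and loses integer precision that the built-in keeps:

```python
import math

# math.pow casts both arguments to C doubles, so the result is a plain float
# and large integers are silently rounded:
print(math.pow(2, 100))   # 1.2676506002282294e+30

# The built-in pow dispatches to int.__pow__ and stays exact:
print(pow(2, 100))        # 1267650600228229401496703205376
```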

builtin pow()

The built-in pow() (same as the ** operator), on the other hand, behaves very differently: it uses the object's own implementation of the ** operator, which the end user can override if need be by replacing a number's __pow__(), __rpow__() or __ipow__() methods.

For built-in types, it is instructive to study the difference between the power functions implemented for the various numeric types, for example float, long and complex.
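
A quick sketch of those differences under Python 3 (semantics have shifted since the Python 2 era this answer was written in): the same negative base behaves differently for each numeric type, and differently again under math.pow:

```python
import math

# float.__pow__ with a negative base and a non-integral exponent returns a
# complex number in Python 3 (in Python 2 it raised ValueError):
complex_result = (-1.0) ** 0.5
print(complex_result)

# int.__pow__ stays in exact integer arithmetic for integer exponents:
int_result = pow(-1, 2)
print(int_result)

# math.pow works only on real C doubles, so it raises instead:
try:
    math.pow(-1.0, 0.5)
    math_pow_error = None
except ValueError as exc:
    math_pow_error = exc
print("math.pow raised:", type(math_pow_error).__name__)
```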

Overriding the default behaviour

Emulating numeric types is described here. Essentially, if you are creating a new type for numbers with uncertainty, you will have to provide the __pow__(), __rpow__() and possibly __ipow__() methods for your type. This will allow your numbers to be used with the ** operator:

import math

class Uncertain:
    def __init__(self, x, delta=0):
        self.delta = delta
        self.x = x
    def __pow__(self, other):
        return Uncertain(
            self.x ** other.x,
            Uncertain._propagate_power(self, other)
        )
    @staticmethod
    def _propagate_power(A, B):
        # First-order error propagation for f = A**B:
        # sigma_f**2 = (B*A**(B-1)*sigma_A)**2 + (A**B*ln(A)*sigma_B)**2
        return math.sqrt(
            ((B.x * A.x ** (B.x - 1)) ** 2) * A.delta * A.delta +
            ((A.x ** B.x * math.log(A.x)) ** 2) * B.delta * B.delta
        )

In order to override math.pow() you will have to monkey-patch it to support your new type:

def new_pow(a, b):
    _a = Uncertain(a)
    _b = Uncertain(b)
    return _a ** _b

math.pow = new_pow

Note that for this to work you'll have to wrangle the Uncertain class to cope with an Uncertain instance as an input to __init__().
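
One way to do that wrangling, as a minimal sketch, is to let __init__ pass existing Uncertain instances through unchanged:

```python
class Uncertain:
    def __init__(self, x, delta=0):
        # Accept a plain number or an existing Uncertain instance, so a
        # monkey-patched math.pow can wrap either kind of argument.
        if isinstance(x, Uncertain):
            self.x, self.delta = x.x, x.delta
        else:
            self.x, self.delta = x, delta

# Wrapping an Uncertain value is now a no-op copy:
u = Uncertain(Uncertain(2.0, 0.1))
print(u.x, u.delta)   # 2.0 0.1
```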

brice
  • This is the kind of answer I was looking for: some hard facts about the difference between `pow(x, y)` and `math.pow(x, y)` for real numbers. However, there is one missing piece: do you know how `__pow__()` is implemented for real numbers? The big question is: does it use the underlying C math library like `math.pow()` does, and does it handle exceptional cases in the same way (including the exceptions it raises)? I had a quick look at the source of `pow(x, y)` at http://hg.python.org/cpython/file/6a60359556f9/Objects/floatobject.c#l807, but have not yet been able to conclude. – Eric O. Lebigot Apr 24 '12 at 07:18
  • `__pow__` does end up using the underlying math library. It's [in the file you linked](http://hg.python.org/cpython/file/6a60359556f9/Objects/floatobject.c#l911) on line 911: `ix = pow(iv, iw);`. Considering what you're trying to do, I would actually set up a test suite and run it against the builtin `pow()` and `math.pow()` in order to understand their behaviour. I've added a script that will run a battery of tests for this purpose. – brice Apr 24 '12 at 08:49
  • BTW, I had a look at [Uncertainty](http://packages.python.org/uncertainties/), and it looks really useful! Also, I'm wondering, do you take into account the underlying precision when doing your uncertainty calculations? – brice Apr 24 '12 at 09:24
  • Thank you for your nice comment. If by "underlying precision" you mean the float precision, then the answer is "sometimes": most uncertainty calculations are done analytically (the formulas are of course evaluated numerically). However, users can also require numerical uncertainty calculations for, say, Fortran functions that cannot accept number with uncertainties; in this case, I do take into account the precision of floating-point numbers (when calculating numerical derivatives). – Eric O. Lebigot Apr 24 '12 at 10:24

math.pow() implicitly converts its arguments to float:

>>> import math
>>> from decimal import Decimal
>>> from fractions import Fraction
>>> math.pow(Fraction(1, 3), 2)
0.1111111111111111
>>> math.pow(Decimal(10), -1)
0.1

but the built-in pow does not:

>>> pow(Fraction(1, 3), 2)
Fraction(1, 9)
>>> pow(Decimal(10), -1)
Decimal('0.1')

My goal is to provide an implementation of both the built-in pow() and of math.pow() for numbers with uncertainty

You can overload pow and ** by defining __pow__ and __rpow__ methods for your class.

However, you can't overload math.pow (without hacks like math.pow = pow). You can make a class usable with math.pow by defining a __float__ conversion, but then you'll lose the uncertainty attached to your numbers.
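
For example, with a toy Uncertain class (mine, for illustration only), a __float__ method is enough to make math.pow accept it, but the returned value is a bare float with no delta:

```python
import math

class Uncertain:
    def __init__(self, x, delta=0.0):
        self.x, self.delta = x, delta
    def __float__(self):
        # math.pow coerces its arguments through this conversion.
        return float(self.x)

u = Uncertain(2.0, delta=0.1)
result = math.pow(u, 3)
print(result, type(result).__name__)   # 8.0 float -- the uncertainty is lost
```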

dan04
  • Good answer. FWIW, in the CPython source code, the [built-in pow function](http://hg.python.org/cpython/file/c7163a7f7cd2/Python/bltinmodule.c#l1505) calls `PyNumber_Power`, which calls `__pow__` on the object in question. For example, see the [source for float_pow](http://hg.python.org/cpython/file/9a84cab7ff9d/Objects/floatobject.c#l637). In comparison, the [source for math.pow](http://hg.python.org/cpython/file/9a84cab7ff9d/Modules/mathmodule.c#l1780) does indeed convert its arguments to floats first thing. – Ben Hoyt Apr 23 '12 at 15:28
  • You might want to add in that `math.pow` is only there because the `math` module reflects the standard C math library. – orlp Apr 23 '12 at 23:55
  • @dan04: Are you implying that besides float conversion the two functions have exactly the same behavior (numerical results, but also exceptions, corner case values, etc.)? This is really what the question is about. :) – Eric O. Lebigot Apr 24 '12 at 07:21
  • @dan04: PS: brice's answer shows that `math.pow()` and the built-in `pow()` do behave differently with two float arguments. – Eric O. Lebigot Apr 24 '12 at 11:58

Python's standard pow includes a simple hack that makes pow(2, 3, 2) faster than (2 ** 3) % 2: the three-argument form performs modular exponentiation directly (of course, you'll only notice the difference with large numbers).
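
A rough sketch of that speed-up (absolute timings will vary by machine): the three-argument form reduces modulo z at each squaring step, while the two-step form first builds the full power:

```python
import timeit

# pow(3, 10**5, 97) never holds more than a few digits at a time;
# (3 ** 10**5) % 97 first computes a ~48,000-digit integer.
fast = timeit.timeit("pow(3, 10**5, 97)", number=20)
slow = timeit.timeit("(3 ** 10**5) % 97", number=20)
print("three-arg pow: %.6fs   two-step: %.6fs" % (fast, slow))

# Both forms agree on the result:
same = pow(3, 10**5, 97) == (3 ** 10**5) % 97
print(same)   # True
```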

Another big difference is how the two functions handle different input types:

>>> pow(2, 1+0.5j)
(1.8810842093664877+0.679354250205337j)
>>> math.pow(2, 1+0.5j)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: can't convert complex to float

However, I have no idea why anyone would prefer math.pow over pow.

Tom van der Woerdt
  • Why do you call the optimization of a special case a *simple hack*? – Wolf Feb 16 '15 at 10:19
  • @Wolf Because that's what optimizations for special cases are, imho. There's certainly value in them, but they're simple and only address a single use case, making them hacks. – Tom van der Woerdt Feb 16 '15 at 23:01
  • @TomvanderWoerdt Are you addressing the design flaw with unreasonable function overloading? In that case I agree: A name like `powmod` would fit much better. – Wolf Feb 17 '15 at 09:04

Just adding a %timeit comparison:

In [1]: def pair_generator(): 
    ...:     yield (random.random()*10, random.random()*10) 
    ...:   

In [2]: %timeit [a**b for a, b in pair_generator()]                                                                    
538 ns ± 1.94 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)

In [3]: %timeit [math.pow(a, b) for a, b in pair_generator()]                                                          
632 ns ± 2.77 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
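
Since the constant folding of literal expressions like `5.2**2.2` can hide the real cost of `**`, a fairer variant (just a sketch; timings are machine-dependent) keeps the operands in variables via a setup string:

```python
import timeit

# Using a setup string keeps the operands as variables, so the interpreter
# cannot constant-fold the power into the bytecode.
setup = "import math; a, b = 5.2, 2.2"
t_op = timeit.timeit("a ** b", setup=setup, number=100000)
t_math = timeit.timeit("math.pow(a, b)", setup=setup, number=100000)
print("a ** b: %.4fs   math.pow(a, b): %.4fs" % (t_op, t_math))
```
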
zhukovgreen
  • The conclusion is actually not obvious from these results: in fact, in Python 3.6.2, if you check the Python bytecode for `5.2**2.2`, you see that the interpreter calculates it and replaces it by a constant (`import dis; dis.dis(5.2**2.2)`). The timing result for `5.2**2.2` therefore does not include the calculation time! – Eric O. Lebigot Jul 31 '19 at 10:48
  • @EricOLebigot thanks for pointing this out. I edited the post – zhukovgreen Aug 01 '19 at 05:42