I need a Python function that evaluates a polynomial at a set of input points. The function takes two inputs: x, a vector of input values, and w, a vector of polynomial weights (ordered so that the jth element is the coefficient of the jth-order monomial, i.e., x**j). The function outputs the polynomial's prediction at each input point.

Is there a python built in function for this?

Tomasz Bartkowiak
caosanity
  • No, not a built-in function. You probably would like to check `scikit-learn`. – Celius Stingher Oct 01 '19 at 20:35
  • @CeliusStingher please see the answer from norok2 regarding numpy.polyval() rather than scikit-learn. – James Phillips Oct 01 '19 at 21:20
  • OP asked for built-in. From Python's documentation: "The library also contains built-in functions and exceptions — objects that can be used by all Python code without the need of an import statement". Am I getting something wrong (the definition of built-in), or is `numpy` not built-in? – Celius Stingher Oct 01 '19 at 21:23
  • @CeliusStingher I will carefully consider what you have written. – James Phillips Oct 02 '19 at 00:07

3 Answers

See numpy.poly1d

Construct the polynomial x^2 + 2x + 3:

import numpy as np

p = np.poly1d([1, 2, 3])
p(1)  # returns 6
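Note that `np.poly1d` expects coefficients ordered highest power first, while the question's weight vector w is ordered lowest power first (w[j] multiplies x**j), so the weights need reversing. A minimal sketch:

```python
import numpy as np

# The question's w is low-to-high order: w[j] is the coefficient of x**j.
# np.poly1d wants high-to-low, so reverse the vector first.
w = [3, 2, 1]             # represents 3 + 2*x + 1*x**2
p = np.poly1d(w[::-1])    # same polynomial as np.poly1d([1, 2, 3])
print(p([0, 1, 2]))       # [ 3  6 11]
```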
Tomasz Bartkowiak

It seems you are describing numpy.polyval():

import numpy as np

# 1 * x**2 + 2 * x**1 + 3 * x**0
# computed at points: [0, 1, 2]
y = np.polyval([1, 2, 3], [0, 1, 2])
print(y)
# [ 3  6 11]

Note that the same could be achieved with np.poly1d(), which should be more efficient if you are computing values from the same polynomial multiple times:

import numpy as np

# 1 * x**2 + 2 * x**1 + 3 * x**0
my_poly_func = np.poly1d([1, 2, 3])
# computed at points: [0, 1, 2]
y = my_poly_func([0, 1, 2])
print(y)
# [ 3  6 11]

If you want to use only Python built-ins, you could easily define a polyval() version yourself, e.g.:

def polyval(p, x):
    return [sum(p_i * x_i ** i for i, p_i in enumerate(p[::-1])) for x_i in x]


y = polyval([1, 2, 3], [0, 1, 2])
print(y)
# [3, 6, 11]

or, more efficiently:

def polyval_horner(p, x):
    y = [] 
    for x_i in x:
        y_i = 0
        for p_i in p:
            y_i = x_i * y_i + p_i
        y.append(y_i)
    return y


y = polyval_horner([1, 2, 3], [0, 1, 2])
print(y)
# [3, 6, 11]

but I would recommend using NumPy unless you have a good reason not to (for example, if your result would overflow with NumPy but not with pure Python).
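To illustrate that overflow point (a sketch; the exact wrapped value depends on your platform's integer width and NumPy version), compare a large integer polynomial evaluated both ways:

```python
import numpy as np

def polyval_horner(p, x):
    # Pure-Python Horner evaluation: exact, thanks to arbitrary-precision ints.
    y = []
    for x_i in x:
        y_i = 0
        for p_i in p:
            y_i = x_i * y_i + p_i
        y.append(y_i)
    return y

coeffs = [1] * 25                         # 1 + x + x**2 + ... + x**24
exact = polyval_horner(coeffs, [10])[0]   # sum of 10**i for i < 25
print(exact)                              # 1111111111111111111111111 (25 ones)
# NumPy evaluates with fixed-width integers, which silently wrap here:
print(int(np.polyval(coeffs, 10)))
```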

norok2

For fun you could try implementing the basic algorithms, naive iteration being the worst and Horner's rule being better (I believe Karatsuba is the best). Here are the first two:

def iterative(coefficients, x):  # very inefficient
    # coefficients are ordered highest power first
    i = len(coefficients)
    result = coefficients[i - 1]
    for z in range(i - 1, 0, -1):
        temp = x
        for y in range(z - 1):  # build x**z by repeated multiplication
            temp = temp * x
        result += temp * coefficients[i - z - 1]
    return (x, result)

def horner(coefficients, x):
    # Horner's rule: one multiply and one add per coefficient
    result = 0
    for c in coefficients:
        result = x * result + c
    return (x, result)

I think numpy is faster in all cases, since it runs the evaluation loop in C-level code.
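As a quick sanity check (a self-contained sketch, repeating the horner definition above so it runs standalone), the hand-rolled version can be compared against np.polyval:

```python
import numpy as np

def horner(coefficients, x):
    # Horner's rule, coefficients ordered highest power first
    result = 0
    for c in coefficients:
        result = x * result + c
    return (x, result)

coefficients = [1, 2, 3]   # x**2 + 2*x + 3
for x in range(5):
    _, result = horner(coefficients, x)
    assert result == np.polyval(coefficients, x)
print("horner matches np.polyval")  # prints if all checks pass
```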

neutrino_logic
  • `local` and `iter` shadow Python built-in names. Also, the inner loop in the `iterative()` solution seems to ignore the existence of the exponentiation [operator `**`](https://docs.python.org/3/library/operator.html#operator.pow). – norok2 Oct 02 '19 at 08:29
  • @norok2 Thanks, changed those keyword variables. Re **, these algorithms are designed to rely on low-level instructions like add and mult, while avoiding calls to more expensive functions like **, for improved speed. – neutrino_logic Oct 02 '19 at 16:00
  • I am pretty sure that replacing `**` with a `for` loop is going to be brutally slower in Python, contrarily to what you could expect with C++. – norok2 Oct 02 '19 at 16:19
  • Probably, it could be a good case for using the ctype/cython approach, and I think numpy does something like that internally. https://stackoverflow.com/questions/1942298/wrapping-a-c-library-in-python-c-cython-or-ctypes – neutrino_logic Oct 02 '19 at 17:25
  • I'd assume you will be buying some speed with Cython or even Numba, as long as you are not overflowing. But even then `**` still has quite competitive timings. Overall, not using it is not justified unless you have serious indication from a `profiler` that `**` is the bottleneck. – norok2 Oct 02 '19 at 17:46
  • I think your original answer on using numpy was the correct one, but note that numpy uses the Horner algorithm internally, I assume at the C level, so reinventing that would be a mistake (again this was just for fun): https://docs.scipy.org/doc/numpy/reference/generated/numpy.polyval.html – neutrino_logic Oct 02 '19 at 18:34
  • well, yes and no. If you need to evaluate a horribly large polynomial, NumPy will overflow, pure Python will not, as long as you have integers. – norok2 Oct 02 '19 at 18:45