
Is there an easy way to get a solution where there is a constraint on the maximum value of a derivative of a polynomial function f(x), for a certain range of values for x?

As answered in this question, `curve_fit` from `scipy.optimize` can deal with bounds on the individual coefficients, as in the following example:

import numpy as np
from scipy.optimize import curve_fit

def func(x, a, b, c, d):
    return a + b * x + c * x ** 2 + d * x ** 3

x = np.array([0.0, 1.0, 2.0, 3.0,  4.0,  5.0])
y = np.array([0.0, 0.8, 0.9, 0.1, -0.8, -1.0])

popt_cons, _ = curve_fit(func, x, y, bounds=([-np.inf, 2, -np.inf, -np.inf], [np.inf, 2.001, np.inf, np.inf]))
print(popt_cons)
>>> [-0.14331349  2.         -0.95913556  0.10494372]

But what if I wanted the best-fit polynomial with a constraint on, for example, the maximum value of the acceleration (second derivative) over a certain range of x values? Differentiating the function twice shows that this means a constraint on the value of 2*c + 6*d*x for, let's say, x between 0 and 10. Is there a method to do this, or do I have to build it from scratch?
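For concreteness, the second derivative in question can be obtained mechanically with `np.polyder` rather than by hand (a sketch; the coefficients are just the `curve_fit` output from the snippet above, in the highest-degree-first order that `np.polyval` expects):

```python
import numpy as np

# Coefficients d, c, b, a (highest degree first, as np.polyval expects),
# taken from the curve_fit output above
coeffs = np.array([0.10494372, -0.95913556, 2.0, -0.14331349])

# Second derivative of a + b*x + c*x**2 + d*x**3 is 2*c + 6*d*x
second_deriv = np.polyder(coeffs, 2)

# Evaluate it on the range of interest, x between 0 and 10
x_check = np.linspace(0.0, 10.0, 101)
print(np.polyval(second_deriv, x_check).max())
```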

suitendaal
  • I think you'll have to build it - seems to me that it's just a matter of adding constraints. Maybe look at sympy to get the derivatives automatically. – NNN Jun 15 '21 at 10:29

1 Answer


The curve_fit method doesn't support additional constraints. However, you could formulate it as a non-linear least-squares problem

min ||f(x, coeffs) - y||^2

s.t.    lb <= coeffs <= ub
     f''(x, coeffs) <= max_val for all x_lb <= x <= x_ub

with additional constraints and solve it with minimize. Here's an example of how it could be done by means of np.polyval and np.polyder:

import numpy as np
from scipy.optimize import minimize

x = np.array([0.0, 1.0, 2.0, 3.0,  4.0,  5.0])
y = np.array([0.0, 0.8, 0.9, 0.1, -0.8, -1.0])

def objective(coeffs):
    return np.linalg.norm(np.polyval(coeffs, x) - y)

def constraint(coeffs, deriv_order, x_lb, x_ub, max_val):
    deriv_coeffs = np.polyder(coeffs, deriv_order)
    # Evaluate the derivative for all x_lb <= x <= x_ub
    deriv_value = np.polyval(deriv_coeffs, x[(x >= x_lb) & (x <= x_ub)])
    return -1.0*deriv_value + max_val

# Each inequality constraint has the form fun(x) >= 0
cons = [{'type': 'ineq', 'fun': lambda coeffs: constraint(coeffs, 2, 0.0, 3.0, 20)}]
bnds = [(-np.inf, np.inf), (2, 5.001), (-np.inf, np.inf), (-np.inf, np.inf)]

poly_degree = 3
res = minimize(objective, x0=2.0*np.ones(poly_degree+1), bounds=bnds, constraints=cons)

Note that each inequality constraint has the form fun(x) >= 0, i.e. here -f''(x, coeffs) + max_val >= 0, with x_lb = 0.0, x_ub = 3.0 and max_val = 20 for the second derivative. Finally, res.x contains the polynomial coefficients, in the highest-degree-first order expected by np.polyval.
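After the fit it is worth checking that the constraint actually holds. A small helper like the following works (a sketch; `max_derivative` is a hypothetical name, and the coefficients shown are just the question's `curve_fit` output, used for illustration):

```python
import numpy as np

def max_derivative(coeffs, deriv_order, x_lb, x_ub, n=200):
    """Largest value of the given derivative on a dense grid over [x_lb, x_ub]."""
    grid = np.linspace(x_lb, x_ub, n)
    return np.polyval(np.polyder(np.asarray(coeffs), deriv_order), grid).max()

# Illustration with the question's cubic coefficients (highest degree first)
coeffs = [0.10494372, -0.95913556, 2.0, -0.14331349]
print(max_derivative(coeffs, 2, 0.0, 3.0))
```

Applied to `res.x`, the returned value should be at most `max_val` (up to solver tolerance).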

joni
  • Thank you! The only thing I would change is to replace `deriv_value = np.polyval(deriv_coeffs, x[(x >= x_lb) & (x <= x_ub)])` with `x_eval = np.arange(x_lb, x_ub, 0.1)` followed by `deriv_value = np.polyval(deriv_coeffs, x_eval)`, so that values of `x` outside of the known data points are also evaluated. – suitendaal Jun 15 '21 at 12:59
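Putting the comment's suggestion into the answer's code, the constraint would be evaluated on a dense grid over [x_lb, x_ub] instead of only at the data points (a sketch; the 0.1 grid step is taken from the comment):

```python
import numpy as np
from scipy.optimize import minimize

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.0, 0.8, 0.9, 0.1, -0.8, -1.0])

def objective(coeffs):
    return np.linalg.norm(np.polyval(coeffs, x) - y)

def constraint(coeffs, deriv_order, x_lb, x_ub, max_val):
    deriv_coeffs = np.polyder(coeffs, deriv_order)
    # Evaluate on a dense grid so the constraint also holds between data points
    x_eval = np.arange(x_lb, x_ub, 0.1)
    return max_val - np.polyval(deriv_coeffs, x_eval)

# Each inequality constraint has the form fun(x) >= 0
cons = [{'type': 'ineq', 'fun': lambda coeffs: constraint(coeffs, 2, 0.0, 3.0, 20)}]

poly_degree = 3
res = minimize(objective, x0=2.0 * np.ones(poly_degree + 1), constraints=cons)
print(res.x)
```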