
My goal is to fit a 4th-degree polynomial by least squares, subject to some constraints, so my plan is to use scipy.optimize.minimize(...., method='SLSQP', ....). In optimization it is always good to pass the Jacobian to the method. I am not sure, however, how to design my 'jac' function.

My objective function is designed like this:

def least_squares(args_pol, x, y):
    a, b, c, d, e = args_pol
    return ((y-(a*x**4 + b*x**3 + c*x**2 + d*x + e))**2).sum() 
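For context, this is roughly how I intend to call the optimizer; the data and the initial guess below are only placeholders for illustration:

import numpy as np
from scipy.optimize import minimize

# placeholder data; in my real problem x and y hold the coordinates of the points
x = np.linspace(-1.0, 1.0, 50)
y = x**4 - 0.5*x**2 + 0.1

args0 = np.zeros(5)  # initial guess for a, b, c, d, e
res = minimize(least_squares, args0, args=(x, y), method='SLSQP')
print(res.x)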

where x and y are numpy arrays containing the coordinates of the points. I found in the documentation that the 'jacobian' of scipy.optimize.minimize is the gradient of the objective function, and thus an array of its first derivatives.

For args_pol it is easy to find the first derivatives; for example,

db = (2*(a*x**4 + b*x**3 + c*x**2 + d*x + e - y)*x**3).sum()
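Written out for all five coefficients, my (untested) attempt at the gradient function looks like this:

def least_squares_grad_pol(args_pol, x, y):
    a, b, c, d, e = args_pol
    # residuals of the polynomial fit
    r = a*x**4 + b*x**3 + c*x**2 + d*x + e - y
    da = (2*r*x**4).sum()
    db = (2*r*x**3).sum()
    dc = (2*r*x**2).sum()
    dd = (2*r*x).sum()
    de = (2*r).sum()
    # only the coefficient derivatives? or also dx, dy? -- see my question below
    return np.array([da, db, dc, dd, de])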

but for each x[i] in my numpy array x the derivative is

dx_i = 2*(a*x[i]**4 + b*x[i]**3 + c*x[i]**2 + d*x[i] + e - y[i])*
       (4*a*x[i]**3 + 3*b*x[i]**2 + 2*c*x[i] + d)

and so on for each y[i]. Thus a reasonable way seems to be to compute these derivatives as numpy arrays dx and dy.
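In vectorized numpy form (assuming such per-point derivatives are needed at all), that would be something like:

def point_derivatives(args_pol, x, y):
    # array form of the per-point expressions above
    a, b, c, d, e = args_pol
    p = a*x**4 + b*x**3 + c*x**2 + d*x + e
    dx = 2*(p - y)*(4*a*x**3 + 3*b*x**2 + 2*c*x + d)
    dy = -2*(p - y)
    return dx, dy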

My question is: what form of result should my gradient function return? For example, should it look like

return np.array([[da, db, dc, dd, de], [dx[1], dx[2], .... dx[len(x)-1]], 
                 [dy[1], dy[2],..........dy[len(y)-1]]])

or should it look like

return np.array([da, db, dc, dd, de, dx, dy])

Thanks for any explanations.

Bobesh
  • I assume `x` and `y` are constant parameters and you only optimize the parameters in `args_pol` (otherwise you'd have a problem with the cost function). In this case the Jacobian should contain only `[da, db, dc, dd, de]`. – MB-F Nov 29 '17 at 12:41
