
I have an optimization problem that I'm solving with SLSQP in scipy.optimize.minimize. My objective function f(x) is implemented as a series of Python functions combining standard arithmetic with conditionals, loops, and more complex mathematical operations.

Currently I'm using finite differences to compute the gradient, but as the size of the decision vector grows (~4000 elements), this becomes incredibly slow. Given the complexity of f(x), an analytic gradient may be infeasible. Instead, I'm considering approximating the function with a polynomial and then optimizing that surrogate analytically.

  1. Is this a good solution to my problem? Accuracy is not critical; I can tolerate a few percent of error. If so, how should I go about doing this? I have a general idea of how function approximation works, but am relatively inexperienced in practice.

  2. Are there any other directions I should look into if this isn't a good idea?
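For concreteness, here is a minimal sketch of my current setup (the objective below is a stand-in for the real f(x), and the problem size is reduced so it runs quickly):

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in objective: the real f(x) mixes arithmetic, loops, and conditionals.
def f(x):
    return np.sum((x - 1.0) ** 2)

n = 50  # the real problem has n ~ 4000
x0 = np.zeros(n)
bounds = [(0.0, 4.0)] * n

# With jac=None, SLSQP falls back to finite differences:
# each gradient evaluation costs roughly n extra calls to f.
res = minimize(f, x0, method="SLSQP", bounds=bounds)
print(res.success, res.fun)
```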

halfcup
  • Polynomial fitting is good, with a couple of caveats. Is the feasible set bounded? If not, interpolating a crazy function like that over all of the reals is probably not a good idea. Do you know how "crazy" your function is? How many interpolating points might you need to get an accurate estimate? There are some good methods using [Chebyshev extrema](https://en.wikipedia.org/wiki/Chebyshev_polynomials#Roots_and_extrema), which I can help with if you like. – Kyle May 25 '18 at 02:50
  • 1
    You may try Pyomo. That will give you automatic differentiation and access to large scale sparse solvers. – Erwin Kalvelagen May 25 '18 at 11:34
  • 1
    You may also try to ask this question on Computational Science SE with a mention of this post. https://scicomp.stackexchange.com/ – Anton Menshov May 25 '18 at 12:24
  • @KyleRoth, thanks! That sounds pretty good. Each element of the input vector is bounded between 0 and 4, so that should make things easier. My other issue is that the input vector can be of any size, so I was also wondering whether a function can be approximated for a vector of arbitrary length. – halfcup May 26 '18 at 17:59
  • 2D Chebyshev interpolation is discussed [here](https://math.stackexchange.com/q/778471/108876). I'm sure SciPy has a package for Chebyshev interpolation that supports multi-dimensional domains. – Kyle May 26 '18 at 20:48
  • I am just going to note here about using Chebyshev interpolation: it is only better because of the spacing of the points. You're concerned about speed. There are implementations of Chebyshev reprojection using Fourier-series methods, but I don't think they are in the SciPy package; I have only seen them in the literature. –  May 26 '18 at 22:35
  • Yes, that would be a good idea. But you can also gain a factor of about 10 to 100 by using a low-level callable (repeatedly calling a Python function is terribly slow), e.g. https://stackoverflow.com/questions/49683653/how-to-pass-additional-parameters-to-numba-cfunc-passed-as-lowlevelcallable-to-s/49732825#49732825 or https://stackoverflow.com/a/50097776/4045774 – max9111 May 28 '18 at 11:37
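To make the Chebyshev suggestion from the comments concrete, here is a 1-D sketch using NumPy's Chebyshev tools. The function and degree are illustrative only; the actual f(x) is multivariate, so in practice this would be applied per coordinate or via tensor-product grids:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Illustrative smooth "expensive" scalar function on the bounded domain [0, 4].
def f(t):
    return np.exp(-t) * np.sin(3 * t) + 0.5 * t

# Least-squares fit of a Chebyshev series over the domain [0, 4].
t = np.linspace(0.0, 4.0, 200)
p = C.Chebyshev.fit(t, f(t), deg=20, domain=[0.0, 4.0])

# The surrogate has an exact, cheap derivative -- no finite differencing needed.
dp = p.deriv()

# Check the approximation error on a dense grid.
t_test = np.linspace(0.0, 4.0, 1000)
max_err = np.max(np.abs(p(t_test) - f(t_test)))
print(max_err)
```

For a smooth function like this, the error is far below a few-percent tolerance; a rougher f(x) would need a higher degree or piecewise fits.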
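Separately, if even part of the gradient can be written down, passing it via jac= removes the ~n extra objective evaluations that each finite-difference gradient costs. A sketch using SciPy's built-in Rosenbrock function and its exact gradient (stand-ins for the real objective):

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

n = 20
x0 = np.full(n, 2.0)
bounds = [(0.0, 4.0)] * n

# Finite-difference gradient: roughly n objective calls per gradient.
res_fd = minimize(rosen, x0, method="SLSQP", bounds=bounds)

# Exact gradient: one cheap gradient call replaces those n objective calls.
res_an = minimize(rosen, x0, method="SLSQP", bounds=bounds, jac=rosen_der)

# nfev counts objective evaluations; the analytic-gradient run needs far fewer.
print(res_fd.nfev, res_an.nfev)
```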

0 Answers