I have an optimization problem that I'm solving with SLSQP via scipy.optimize.minimize. My objective function f(x) is built from a series of Python functions that combine standard arithmetic with conditionals, loops, and more complex mathematical operations.
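For concreteness, here's a stripped-down sketch of the kind of setup I mean (the toy objective and dimension are placeholders; the real f(x) is far more involved):

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Toy stand-in for my real objective: arithmetic, conditionals, loops, ...
    total = 0.0
    for i in range(len(x) - 1):
        if x[i] > 0:
            total += np.sin(x[i]) * x[i + 1]
        else:
            total += x[i] ** 2
    return total

x0 = np.zeros(50)  # the real decision vector has ~4000 components
# No jac= supplied, so SLSQP approximates the gradient by finite differences.
res = minimize(f, x0, method="SLSQP")
```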
Currently I'm using finite differences to approximate the gradient, but as the size of the decision vector grows (~4000 components), each gradient evaluation needs on the order of 4000 calls to f(x) and becomes incredibly slow. Given the complexity of f(x), deriving an analytic gradient is probably infeasible. Instead, I'm considering approximating f(x) with some polynomial and then differentiating that polynomial analytically.
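To make the idea concrete, here's a 1-D toy of what I have in mind: fit a polynomial to sampled values of the function, then differentiate the fit analytically. (I realize my real x has ~4000 components, and how this scales is exactly what I'm unsure about.)

```python
import numpy as np

def f1d(t):
    # 1-D toy stand-in for f(x), including a conditional
    return np.where(t > 0, np.sin(t) * t, t ** 2)

ts = np.linspace(-2.0, 2.0, 200)                          # sample points
poly = np.polynomial.Chebyshev.fit(ts, f1d(ts), deg=12)   # polynomial surrogate
dpoly = poly.deriv()                                      # exact derivative of the surrogate

print(dpoly(0.5))  # cheap "gradient" of the surrogate at t = 0.5
```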
Is this a good solution to my problem? Accuracy is not critical; I can tolerate a few percent of error. If so, how should I go about it? I have a general idea of how function approximation works, but I'm relatively inexperienced in practice.
Are there any other directions I should look into if this isn't a good idea?