
I come upon the following optimization problem:

The target function is a multivariate, non-differentiable function that takes a list of scalars as its argument and returns a scalar. It is non-differentiable in the sense that the computation inside the function is built on pandas, using a series of rolling, std, etc. operations.

The pseudocode is below:

def target_function(x: list) -> float:
    # calculations
    return output

Besides, each component of the argument x has its own bounds, defined as a tuple (min, max). How should I use the scipy.optimize library to find the global minimum of this function? Are there any other libraries that could help?

I already tried scipy.optimize.brute, which took forever, and scipy.optimize.minimize, which never produced a seemingly correct answer.

Nathan
  • Have you tried [basinhopping](https://docs.scipy.org/doc/scipy-1.1.0/reference/generated/scipy.optimize.basinhopping.html#scipy.optimize.basinhopping) or [differential_evolution](https://docs.scipy.org/doc/scipy-1.1.0/reference/generated/scipy.optimize.differential_evolution.html#scipy.optimize.differential_evolution)? – joni Sep 13 '18 at 07:14
  • I tried. basinhopping does not accept boundary conditions and differential_evolution also took like forever. :( – Nathan Sep 13 '18 at 10:02

1 Answer


basinhopping, brute, and differential_evolution are the methods scipy.optimize offers for global optimization. As you've already discovered, brute-force global optimization is not going to be particularly efficient.

Differential evolution is a stochastic method that should do better than brute-force, but may still require a large number of objective function evaluations. If you want to use it, you should play with the parameters and see what will work best for your problem. This tends to work better than other methods if you know that your objective function is not "smooth": there could be discontinuities in the function or its derivatives.
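A minimal sketch of how that might look for your problem (the objective here is just a non-smooth stand-in for your pandas-based function, and the bounds are placeholders; swap in your own):

import numpy as np
from scipy.optimize import differential_evolution

def target_function(x):
    # stand-in for the real pandas-based, non-smooth objective
    x = np.asarray(x)
    return float(np.sum(np.abs(x - 0.7)) + np.sum(np.round(x) ** 2))

# one (min, max) pair per component of x
bounds = [(-5.0, 5.0), (-5.0, 5.0), (-5.0, 5.0)]

result = differential_evolution(
    target_function,
    bounds,
    popsize=20,    # larger populations explore more but cost more evaluations
    maxiter=200,   # cap on the number of generations
    tol=1e-3,      # loosen or tighten to trade accuracy for speed
    polish=False,  # skip the final gradient-based polish on a non-smooth function
    seed=42,       # reproducibility while tuning
)
print(result.x, result.fun)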

Basin-hopping, on the other hand, makes stochastic jumps but also uses local relaxation after each jump. This is useful if your objective function has many local minima, but due to the local relaxation used, the function should be smooth. If you can't easily get at the gradient of your function, you could still try basin-hopping with one of the local minimizers which doesn't require this information.
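For instance, a sketch along these lines runs basin-hopping with the gradient-free Nelder-Mead local minimizer (the objective and starting point are placeholders; note that the bounds are not enforced here, see the next sketch for that):

import numpy as np
from scipy.optimize import basinhopping

def target_function(x):
    # stand-in for the real pandas-based objective
    return float(np.sum(np.abs(np.asarray(x) - 0.7)))

x0 = np.zeros(3)  # initial guess, one entry per parameter

# Nelder-Mead needs no gradient information
minimizer_kwargs = {"method": "Nelder-Mead"}
result = basinhopping(target_function, x0, niter=100,
                      minimizer_kwargs=minimizer_kwargs)
print(result.x, result.fun)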

The advantage of the scipy.optimize.basinhopping routine is that it is very customizable. You can use take_step to define a custom random jump, accept_test to override the test used for deciding whether to proceed with or discard the results of a random jump and relaxation, and minimizer_kwargs to adjust the local minimization behavior.

For example, you might override take_step to stay within your bounds, and then select the L-BFGS-B minimizer, which can numerically estimate your function's gradient as well as take bounds on the parameters. L-BFGS-B works better if you give it a gradient, but I've used it without one and it is still able to minimize well. Be sure to read about all of the parameters of the local and global optimization routines and adjust things like tolerances as needed to improve performance.
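Putting that together, one possible sketch combines a bounds-respecting step class with L-BFGS-B (again, the objective and bounds are placeholders, and L-BFGS-B approximates the gradient numerically when none is given):

import numpy as np
from scipy.optimize import basinhopping

class RandomDisplacementBounds:
    """Propose a random step, then clip it back inside the box constraints."""
    def __init__(self, xmin, xmax, stepsize=0.5):
        self.xmin = np.asarray(xmin)
        self.xmax = np.asarray(xmax)
        self.stepsize = stepsize

    def __call__(self, x):
        step = np.random.uniform(-self.stepsize, self.stepsize, size=x.shape)
        return np.clip(x + step, self.xmin, self.xmax)

def target_function(x):
    # stand-in for the real pandas-based objective
    return float(np.sum(np.abs(np.asarray(x) - 0.7)))

bounds = [(-5.0, 5.0)] * 3          # one (min, max) per parameter
xmin, xmax = zip(*bounds)
take_step = RandomDisplacementBounds(xmin, xmax, stepsize=1.0)

# L-BFGS-B accepts bounds and estimates the gradient numerically if none is supplied
minimizer_kwargs = {"method": "L-BFGS-B", "bounds": bounds}

x0 = np.zeros(3)
result = basinhopping(target_function, x0, niter=200,
                      take_step=take_step,
                      minimizer_kwargs=minimizer_kwargs)
print(result.x, result.fun)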

EE_