Scipy's differential_evolution (like many other optimization routines, such as minimize) accepts a callback function that can be used to halt the optimization early.

On StackOverflow this callback has mainly been discussed as a way to stop the optimization after a time limit (see e.g. here and here).

As both minimize and differential_evolution deal with minimization problems, my question is: can the callback be used to stop the optimization once the best objective function value found so far falls below a user-defined threshold?
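
For context: differential_evolution is documented to stop when its callback returns True, which the answers below exploit. For minimize, recent SciPy versions (1.11+) additionally let the callback raise StopIteration to halt early. A minimal sketch of that variant, assuming the Rosenbrock function and an arbitrary threshold tau:

import numpy as np
from scipy.optimize import minimize, rosen

tau = 1.0  # hypothetical user-defined threshold

def stop_when_below_tau(intermediate_result):
    # New-style callback (SciPy >= 1.11): the parameter must be named
    # intermediate_result; its .fun attribute avoids re-evaluating the objective.
    if intermediate_result.fun <= tau:
        raise StopIteration

res = minimize(rosen, x0=np.full(5, 1.5), callback=stop_when_below_tau)
print(res.fun, res.x)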


2 Answers

Given the lack of answers and comments to this question, at first I thought it was impossible to accomplish. So I opened an enhancement issue on Scipy's GitHub page and asked about adding this functionality.

One of the contributors closed my issue, saying that this can be done, albeit in a tricky (and, in my opinion, not very elegant) manner, and he gave me some hints.

This is the solution I've come up with; I hope it helps:

from scipy.optimize import differential_evolution
from scipy.optimize import rosen
import numpy

class MinimizeStopper(object):
    def __init__(self, f=rosen, tau=1):
        self.fun = f                     # set the objective function
        self.best_x = None
        self.best_func = numpy.inf
        self.tau = tau                   # set the user-desired threshold

    def __call__(self, xk, convergence=None, *args, **kwds):
        fval = self.fun(xk, *args, **kwds)   # extra objective evaluation at xk
        if fval < self.best_func:
            self.best_func = fval
            self.best_x = xk
        if self.best_func <= self.tau:
            print("Terminating optimization: objective function threshold triggered")
            print(self.best_x)
            return True    # returning True tells differential_evolution to halt
        else:
            return False


bounds = [(0,2), (0, 2), (0, 2), (0, 2), (0, 2)]
result = differential_evolution(rosen, bounds, callback=MinimizeStopper(),
                                polish=False, disp=True, maxiter=100, popsize=100)
print(result)

which returns

differential_evolution step 1: f(x)= 10.7709
differential_evolution step 2: f(x)= 10.7709
differential_evolution step 3: f(x)= 8.02332
differential_evolution step 4: f(x)= 2.16592
differential_evolution step 5: f(x)= 2.16592
differential_evolution step 6: f(x)= 2.16592
differential_evolution step 7: f(x)= 0.812177
Terminating optimization: objective function threshold triggered
[1.01141374 0.95894166 0.91957732 0.87022813 0.70102066]
     fun: 0.8121773465012827
 message: 'callback function requested stop early by returning True'
    nfev: 4000
     nit: 7
 success: False
       x: array([1.01141374, 0.95894166, 0.91957732, 0.87022813, 0.70102066])

A few notes:

  1. the solution is inelegant because it requires an additional evaluation of the fitness function just to check the stopping criterion. Unfortunately, there is no workaround, given the inner structure of the scipy.optimize module
  2. I have tested this approach on rosen and it generally works when the objective function needs no additional parameters; if it does, one must play around with *args (see the sketch after this list)
  3. printing self.best_x in the second if branch is, of course, not mandatory. It was just a debugging check I added to verify that the best solution found by the callback is indeed the one returned in result (i.e., the overall best solution found by differential_evolution())
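
Regarding note 2, here is a hedged sketch of one way to handle extra parameters; scaled_rosen, ArgsAwareStopper, and the f_args attribute are illustrative names, not part of scipy. The idea is to store the same argument tuple on the stopper that is passed to the solver:

import numpy
from scipy.optimize import differential_evolution, rosen

def scaled_rosen(x, scale):
    # hypothetical objective that needs an extra parameter
    return scale * rosen(x)

class ArgsAwareStopper(object):
    def __init__(self, f, tau=1, f_args=()):
        self.fun = f
        self.f_args = f_args               # extra parameters for the objective
        self.best_func = numpy.inf
        self.tau = tau

    def __call__(self, xk, convergence=None):
        fval = self.fun(xk, *self.f_args)  # forward the stored parameters
        self.best_func = min(self.best_func, fval)
        return self.best_func <= self.tau  # True asks the solver to stop

bounds = [(0, 2)] * 5
extra = (0.5,)  # the same tuple goes to both the solver and the stopper
result = differential_evolution(scaled_rosen, bounds, args=extra,
                                callback=ArgsAwareStopper(scaled_rosen, f_args=extra),
                                polish=False, maxiter=100, popsize=100)
print(result)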

Your solution works fine, but it has the downside of evaluating the function an extra time on every callback call.

After some digging, and based on https://github.com/scipy/scipy/issues/6878, I found a solution that avoids this. It relies on a protected member of the Scipy library, so I don't know how advisable it is.

from scipy.optimize._differentialevolution import DifferentialEvolutionSolver

function_limit = -9   

with DifferentialEvolutionSolver(func, args=args, bounds=bounds, popsize=100) as solver:
    for step in solver:       # each step is a tuple (xk, best function value)
        func_value = step[1]  # best objective value in the current population
        print(func_value)
        # use <= rather than ==: an exact floating-point match will rarely occur
        if solver.converged() or func_value <= function_limit:
            break

x_result = solver.x

Here, func is the function to optimize and args is a tuple of its additional arguments.
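
For completeness, a self-contained sketch of the same idea, with rosen standing in for func and an arbitrary threshold (both are assumptions for illustration):

from scipy.optimize._differentialevolution import DifferentialEvolutionSolver
from scipy.optimize import rosen

function_limit = 1.0  # arbitrary threshold for this sketch
bounds = [(0, 2)] * 5

with DifferentialEvolutionSolver(rosen, bounds, popsize=100) as solver:
    for xk, func_value in solver:  # each generation yields (best x, best energy)
        print(func_value)
        if solver.converged() or func_value <= function_limit:
            break

print(solver.x)  # best solution found, with no extra objective evaluations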