
I have difficulty writing the bounds of the parameters in basinhopping.

x0 = (a, b, c), with bounds:

a = (0, 100)

b = (0, 0.100)

c = (0, 10)

from scipy.optimize import basinhopping

minimizer_kwargs = { "method": "Nelder-Mead" }

res = basinhopping(rmse, x0, minimizer_kwargs=minimizer_kwargs, T=0.5, stepsize=0.1, niter=200, disp=True)
Azeezof

3 Answers


There are multiple approaches to this, each potentially behaving differently (common in non-convex global optimization). The best approach always takes a priori information about the optimization problem into consideration!

The most robust general approach (and in my opinion the best) would be a combination of:

  • A: inner level: bound-constrained local-search
  • B: outer level: bound-constrained steps

The original author of this optimizer says that relying only on A (as both of the other answers currently do) might fail!

Code:

import numpy as np
from scipy.optimize import basinhopping


""" Example problem
    https://docs.scipy.org/doc/scipy-0.19.1/reference/generated/scipy.optimize.basinhopping.html
"""
def func2d(x):
    f = np.cos(14.5 * x[0] - 0.3) + (x[1] + 0.2) * x[1] + (x[0] + 0.2) * x[0]
    df = np.zeros(2)
    df[0] = -14.5 * np.sin(14.5 * x[0] - 0.3) + 2. * x[0] + 0.2
    df[1] = 2. * x[1] + 0.2
    return f, df

""" Example bounds """
bx0 = (-0.175, 1.)
bx1 = (-0.09, 1.)
bounds = [bx0, bx1]

""" Solve without bounds """
minimizer_kwargs = {"method":"L-BFGS-B", "jac":True}
x0 = [1.0, 1.0]
ret = basinhopping(func2d, x0, minimizer_kwargs=minimizer_kwargs, niter=200)
print(ret.message)
print("unconstrained minimum: x = [%.4f, %.4f], f(x0) = %.4f" % (ret.x[0], ret.x[1],ret.fun))


""" Custom step-function """
class RandomDisplacementBounds(object):
    """random displacement with bounds:  see: https://stackoverflow.com/a/21967888/2320035
        Modified! (dropped acceptance-rejection sampling for a more specialized approach)
    """
    def __init__(self, xmin, xmax, stepsize=0.5):
        self.xmin = xmin
        self.xmax = xmax
        self.stepsize = stepsize

    def __call__(self, x):
        """take a random step but ensure the new position is within the bounds """
        min_step = np.maximum(self.xmin - x, -self.stepsize)
        max_step = np.minimum(self.xmax - x, self.stepsize)

        random_step = np.random.uniform(low=min_step, high=max_step, size=x.shape)
        xnew = x + random_step

        return xnew

bounded_step = RandomDisplacementBounds(np.array([b[0] for b in bounds]), np.array([b[1] for b in bounds]))

""" Custom optimizer """
minimizer_kwargs = {"method":"L-BFGS-B", "jac":True, "bounds": bounds}

""" Solve with bounds """
x0 = [1.0, 1.0]
ret = basinhopping(func2d, x0, minimizer_kwargs=minimizer_kwargs, niter=200, take_step=bounded_step)
print(ret.message)
print("constrained minimum: x = [%.4f, %.4f], f(x0) = %.4f" % (ret.x[0], ret.x[1],ret.fun))

Output:

['requested number of basinhopping iterations completed successfully']
unconstrained minimum: x = [-0.1951, -0.1000], f(x0) = -1.0109
['requested number of basinhopping iterations completed successfully']
constrained minimum: x = [-0.1750, -0.0900], f(x0) = -0.9684
sascha
  • `accept_test` defines a test which will be used to judge whether or not to accept the **step** (text from the docs), not the variables of the minimized function – Sergey Nov 01 '17 at 15:39
  • And? We are now not allowing steps into infeasible solutions. The logic is applied to ```x_new```, not to some kind of step like ```x_new - x_old```. – sascha Nov 01 '17 at 15:42
  • I checked `MyBounds` for `a**2+b**2+c**2`. It doesn't prevent wrong convergence. The result is always 0, 0, 0 for any xmax and xmin – Sergey Nov 01 '17 at 15:50
  • Without code, optimizer status messages and so on, this is useless. Based on the function to optimize, this algorithm is a heuristic after all. – sascha Nov 01 '17 at 15:51
  • `accept_test` does not influence the local minimization and thus cannot be used to bound the function being optimized – Sergey Nov 01 '17 at 17:15
  • Yes (influence) and no (global effect). My older version did mention all these cases and explained them. You did not address it, and you are not really speaking in optimization terms (very informal), so I can't help but say that the original author of this optimizer [says](https://stackoverflow.com/questions/21670080/how-to-find-global-minimum-in-python-optimization-with-bounds/21967888#21967888) that your approach can fail! – sascha Nov 01 '17 at 17:28
  • I agree that using `accept_test` together with `'bounds'` in `minimizer_kwargs` is more robust (it prevents a wrong initial guess for the local optimization... I didn't check, but it should be so), but `accept_test` alone is not sufficient for global constrained optimization – Sergey Nov 01 '17 at 17:39
  • That depends solely on the opt-surface! And no: accept_test does not prevent wrong initial guesses, as the order is local-opt first, then accept-test (as described in an earlier version). Here I'm using take_step for the same effect, but it's called before local-opt. – sascha Nov 01 '17 at 17:39
  • By default `accept_test` just accepts or rejects the initial guess for local optimization. If you changed this order you have invented something new ) – Sergey Nov 01 '17 at 17:54
  • You are wrong (or expressing yourself very unclearly). If accept_test were used before local-opt, then it would be redundant while projecting onto the bounds within L-BFGS-B. Check the code (or trust the docs). – sascha Nov 01 '17 at 18:00
  • Moreover, if you use _local-opt first, then accept-test_, you can lose the optimal constrained solution – Sergey Nov 01 '17 at 18:03
  • Sure... but that's one of the core concepts of global optimization (compare with simulated annealing) :-) – sascha Nov 01 '17 at 18:04
  • I don't know how to show you code which cannot work in the case _local-opt first, then accept-test_. By mail? – Sergey Nov 01 '17 at 18:05
  • It does not matter. The order is given by the algorithm, documented within scipy, and implemented that way in the code. Maybe we have some form of communication problem, or you are missing something. There will always be cases where one order works while the other does not. We are talking about possibly non-convex global optimization here. It's a heuristic without guarantees about global convergence. It can't have them, as long as P ≠ NP and we are talking about polynomial algorithms. – sascha Nov 01 '17 at 18:08
  • [docs](https://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.optimize.basinhopping.html#scipy.optimize.basinhopping): 1) random perturbation of the coordinates 2) local minimization 3) accept or reject the new coordinates based on the minimized function value. `accept_test` influences step 1, not step 3 – Sergey Nov 01 '17 at 18:12
  • And?: ```1) random perturbation of the coordinates 2) local minimization 3) accept or reject the new coordinates based on the minimized function value``` That's the order I talk about (remark: take_step is part of 1). – sascha Nov 01 '17 at 18:14
  • `take_step` and `accept_test` are parts of step 1. Thus in step 2 you can get a point outside your boundaries that was accepted by `accept_test`. This is what I'm talking about. To prevent this you should use constrained local optimization (this is what you missed in the first version... as I remember) – Sergey Nov 01 '17 at 18:21
  • Please... do us a favor and [read the code](https://github.com/scipy/scipy/blob/bbd7a376be0f62b87d1f11eaa86a805033b976d4/scipy/optimize/_basinhopping.py#L95). take_step is part of 1, accept_test is part of 3. You might see it differently, but for me it's settled! – sascha Nov 01 '17 at 18:23
  • In general, you are right. I was confused by one exceptional example. The problem lies in `BasinHoppingRunner`. It does an initial minimization without calling `accept_test`. This initial value can go beyond the boundaries given to `accept_test`. As a result, `accept_test` is always False afterwards, but it can't change the result of the initial local optimization. I think it is a bug, because the solution is out of the boundaries. But I'm right in the statement that `accept_test` cannot guarantee a solution inside the boundaries, and without constrained local optimization it can lead to something far from the optimum. – Sergey Nov 01 '17 at 19:14
  • This always gives me the error: `bounded_step = RandomDisplacementBounds(np.array([b[0] for b in bounds]), np.array([b[1] for b in bounds]), np.array([b[2] for b in bounds])) NameError: name 'bounds' is not defined` – Azeezof Nov 07 '17 at 11:14
  • Well, the error is clear. It's probably a simple programming error. Add a question showing code which can be run... (I have to admit: your multiple attempts at asking here are all hard to work with; that's because of the quality of your questions... compare with other scipy-opt related questions where people create a minimal example which can be run; without changing your way, there is not much help, sorry) – sascha Nov 07 '17 at 11:38
  • @sascha, this code gives me the error (TypeError: accept_test must be callable) – Azeezof Nov 08 '17 at 14:03
  • The code above runs. If you changed something, it might not, of course. And the error is pretty clear about what's wrong: accept_test is not a function (and accept_test is not even within the code above). – sascha Nov 08 '17 at 14:05
  • @sascha, thank you so much for your answer. I would like to ask: can I change the method (L-BFGS-B) to other methods such as Nelder-Mead? – Azeezof Nov 09 '17 at 10:01
  • @Azeezof NM has no support for bounds, meaning: bad things could happen (as one component of the two would not be playing by the rules of the approach). I don't recommend it. – sascha Nov 09 '17 at 13:00
  • @sascha, I don't know why I get this error, do you have any idea please? `bounded_step = Bounds(np.array([b[0] for b in bounds]), np.array([b[1] for b in bounds]), np.array([b[2] for b in bounds])) IndexError: list index out of range` – Azeezof Nov 10 '17 at 13:20
  • @sascha. This is the only solution that actually takes care of the situation where the step-taking runs out of bounds. I don't think the other person actually tries to understand the optimizer author's take on this issue, or cares to. I do have one question about single-variable optimization. The error message keeps telling me the length of x0 != length of bounds. How would you set up the array inputs for x0, xmin, xmax and bounds, and properly instantiate a RandomDisplacementBounds object? I tried all kinds of shapes with numpy, and none of them works. Thank you! – Chen Lizi May 20 '21 at 16:36
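
For readers following the thread above: the `accept_test`-style bounds check being debated is the `MyBounds` pattern from the scipy docs and the linked answer. Here is a minimal sketch, reusing `bounds`, `x0`, `func2d`, `minimizer_kwargs`, and `bounded_step` from the code above; per the discussion, it complements the bounded `take_step` and the bound-constrained local minimizer rather than replacing them:

```python
class MyBounds:
    """accept_test callback: reject any basin-hopping candidate outside the box."""
    def __init__(self, xmin, xmax):
        self.xmin = np.array(xmin)
        self.xmax = np.array(xmax)

    def __call__(self, **kwargs):
        # basinhopping passes the candidate minimum as the keyword argument "x_new"
        x = kwargs["x_new"]
        return bool(np.all(x >= self.xmin) and np.all(x <= self.xmax))

my_bounds = MyBounds([b[0] for b in bounds], [b[1] for b in bounds])
ret = basinhopping(func2d, x0, minimizer_kwargs=minimizer_kwargs,
                   niter=200, take_step=bounded_step, accept_test=my_bounds)
```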

You should use the "bounds" parameter in minimizer_kwargs, which is passed on to scipy.optimize.minimize(). Here is an example:

from scipy.optimize import basinhopping

bnds = ((1, 100), (1, 100), (1, 100))  # your bounds

def rmse(X):  # your function
    a, b, c = X[0], X[1], X[2]
    return a**2 + b**2 + c**2

x0 = [10., 10., 10.]
minimizer_kwargs = {"method": "L-BFGS-B", "bounds": bnds}
ret = basinhopping(rmse, x0, minimizer_kwargs=minimizer_kwargs, niter=10)
print("global minimum: a = %.4f, b = %.4f, c = %.4f | f(x0) = %.4f" % (ret.x[0], ret.x[1], ret.x[2], ret.fun))

The result is

global minimum: a = 1.0000, b = 1.0000, c = 1.0000 | f(x0) = 3.0000

Without bounds it is (clearly) 0, 0, 0.
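
For comparison, dropping `"bounds"` from `minimizer_kwargs` reproduces that unconstrained result (a quick sketch reusing `rmse` and `x0` from above):

```python
# same call, but the local minimizer is now unconstrained
ret = basinhopping(rmse, x0, minimizer_kwargs={"method": "L-BFGS-B"}, niter=10)
# -> global minimum: a = 0.0000, b = 0.0000, c = 0.0000 | f(x0) = 0.0000
```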

Sergey
  • Thank you so much. I would like to ask: could I write it like this? – Azeezof Nov 01 '17 at 15:41
  • `def opt(x0): bd0 = (0,100); bd1 = (0,0.100); bd2 = (2,8); bnds = (bd0, bd1, bd2); minimizer_kwargs = {"method":"L-BFGS-B", "bounds":bnds}; res = basinhopping(rmse, x0, minimizer_kwargs=minimizer_kwargs, T=0.5, stepsize=0.1, niter=200, disp=True); print(res.x); return res.x` – Azeezof Nov 01 '17 at 15:43
  • Theoretically it should work, but I can not guarantee this without seeing the function – Sergey Nov 01 '17 at 15:47
  • the objective function is `err = np.sqrt(np.sum((ref[:,1] - mod[:,1])**2) / len(ref))` – Azeezof Nov 06 '17 at 14:43
  • Before passing `mod[:,1]` to the function you should generate `mod` as the model output. It should be something like `mod = model(a,b,c)` – Sergey Nov 07 '17 at 15:46
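
To make that last comment concrete, here is a minimal sketch of the structure Sergey describes. The `model` function, its exponential form, and the `ref` data are hypothetical stand-ins, since the actual model was never posted:

```python
import numpy as np
from scipy.optimize import basinhopping

# hypothetical reference data: columns are (t, observed value)
t = np.linspace(0., 1., 50)
ref = np.column_stack([t, np.exp(-0.05 * t)])

def model(a, b, c):
    # hypothetical model producing the same (t, value) layout as ref
    return np.column_stack([t, a * np.exp(-b * t) + c])

def rmse(X):
    a, b, c = X
    mod = model(a, b, c)  # generate the model output before comparing columns
    return np.sqrt(np.sum((ref[:, 1] - mod[:, 1])**2) / len(ref))

bnds = ((0, 100), (0, 0.100), (0, 10))  # bounds from the question
minimizer_kwargs = {"method": "L-BFGS-B", "bounds": bnds}
res = basinhopping(rmse, [1., 0.05, 1.], minimizer_kwargs=minimizer_kwargs, niter=50)
print(res.x)
```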

I know this question is old (the accepted answer is from about 5 years ago), and I wish I could comment on it rather than provide a new "answer" (unfortunately, I lack the reputation to do that), but I would like to ask for some clarification on the accepted answer provided by @sascha on Nov 1, 2017.

Specifically, looking at the code of the `__call__` method of `class RandomDisplacementBounds(object)`, I don't understand why the new point is defined as `xnew = x + random_step` rather than simply as `xnew = random_step`.

Probably I am missing something and/or misunderstanding the stated problem and context. My reasoning is that if `xmin` and `xmax` passed to `__init__` are indeed the lower and upper bounds of the vector `x`, and `random_step` is a point randomly drawn from a uniform distribution between `xmin` and `xmax`, then what is called `random_step` is already the new point for the local optimization routine. Isn't this true? I would greatly appreciate any help with this.

AB8
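
For concreteness, the quantities computed in `__call__` can be traced numerically. This sketch uses the bounds and stepsize from the accepted answer and an arbitrary sample point:

```python
import numpy as np

xmin = np.array([-0.175, -0.09])
xmax = np.array([1., 1.])
stepsize = 0.5

x = np.array([0.9, 0.0])                    # current position
min_step = np.maximum(xmin - x, -stepsize)  # [-0.5, -0.09]: allowed offsets, not points
max_step = np.minimum(xmax - x, stepsize)   # [ 0.1,  0.5 ]
random_step = np.random.uniform(low=min_step, high=max_step, size=x.shape)
xnew = x + random_step                      # always lands inside [xmin, xmax]
```

Seen this way, `random_step` is a random offset clipped so that `x + random_step` stays within the bounds (and within ±`stepsize` of `x`), which is why it is added to `x` rather than used as the new point directly.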