
I'm trying to use scipy.optimize functions to find the global minimum of a complicated function with several arguments. scipy.optimize.minimize seems to do the job best of all, namely with the 'Nelder-Mead' method. However, it tends to wander into areas outside the arguments' domain (assigning negative values to arguments that can only be positive) and returns an error in such cases. Is there a way to restrict the arguments' bounds within the scipy.optimize.minimize function itself, or maybe within other scipy.optimize functions?

I've found the following advice:

When the parameters fall out of the admissible range, return a wildly huge number (far from the data to be fitted). This will (hopefully) penalize this choice of parameters so much that curve_fit will settle on some other admissible set of parameters as optimal.

given in this previous answer, but the procedure will take a lot of computational time in my case.
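For concreteness, this is roughly what that workaround looks like (the objective here is a made-up stand-in for my complicated function, only defined for positive arguments):

import numpy as np
import scipy.optimize as optimize

def objective(x):
    # Made-up stand-in for the real function; it is only
    # defined for positive arguments.
    if np.any(x <= 0):
        return 1e12  # huge penalty for inadmissible parameters
    return np.sum(np.log(x)**2)

res = optimize.minimize(objective, x0=[2.0, 3.0], method='Nelder-Mead')
print(res.x)  # should approach [1.0, 1.0]

The flat penalty region gives the optimizer no gradient information to work with, which is why this wastes so many function evaluations.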

  • Making the cost function return a large cost when the inputs are out of the permissible range is a very bad idea, because the search will spend most of its energy exploring the effectively unbounded space of inadmissible answers. Use the `constraints` argument of scipy.optimize.minimize to tell the algorithm where to confine its search. Search for 'constraints' here: http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html#scipy.optimize.minimize – Eric Leschinski Sep 06 '16 at 00:21

4 Answers


The minimize function has a bounds parameter, which can be used to restrict the range of each variable when using the L-BFGS-B, TNC, COBYLA or SLSQP methods.

For example,

import scipy.optimize as optimize

fun = lambda x: (x[0] - 1)**2 + (x[1] - 2.5)**2
res = optimize.minimize(fun, (2, 0), method='TNC', tol=1e-10)
print(res.x)
# [ 1.          2.49999999]

bnds = ((0.25, 0.75), (0, 2.0))
res = optimize.minimize(fun, (2, 0), method='TNC', bounds=bnds, tol=1e-10)
print(res.x)
# [ 0.75  2.  ]
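Note that newer SciPy releases (1.7 and later, if I remember right) also accept bounds with the Nelder-Mead method the question mentions, so on a recent version you don't even need to switch solvers:

bnds = ((0.25, 0.75), (0, 2.0))
res = optimize.minimize(fun, (0.5, 1.0), method='Nelder-Mead', bounds=bnds)
print(res.x)
# expect approximately [ 0.75  2.  ]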
– unutbu
  • I'm trying to do this with an optimization algorithm, and figured bounds was the place to limit it, but unfortunately my results still show 0 even when I impose a (.1, 1) or (.1, .5) bound. Any remedies here? I was building the bounds with a list comprehension, [(.1, .9) for i in range(15)], and still getting 1.0 for one variable and 0 for the other 14 in some cases. – Zach Oakes Sep 01 '19 at 00:50
  • DUH -- .1 doesn't go into 1.0 15 times -- whoops! Sorry, I was using this for a 5-security portfolio, scaled it up, and didn't think to change the minimum. Problem solved. – Zach Oakes Sep 01 '19 at 00:56

The Nelder-Mead solver doesn't support constrained optimization, but there are several others that do.

TNC and L-BFGS-B both support only bound constraints (e.g. x[0] >= 0), which should be fine for your case. COBYLA and SLSQP are more flexible, supporting any combination of bounds, equality constraints and inequality constraints.

You can find more detailed info about the solvers by looking at the docs for the standalone functions, e.g. scipy.optimize.fmin_slsqp for method='SLSQP'.

You can see my previous answer here for an example of constrained optimization using SLSQP.
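Since that linked example isn't reproduced here, a minimal sketch of the same idea, with a made-up objective and constraint:

import scipy.optimize as optimize

fun = lambda x: (x[0] - 1)**2 + (x[1] - 2.5)**2

# scipy expects an 'ineq' constraint function to be non-negative
# when satisfied; this one enforces x[0] + x[1] <= 3.
cons = ({'type': 'ineq', 'fun': lambda x: 3 - x[0] - x[1]},)

res = optimize.minimize(fun, (2, 0), method='SLSQP',
                        bounds=((0, None), (0, None)),
                        constraints=cons)
print(res.x)
# expect approximately [ 0.75  2.25]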

– ali_m

The argument you are looking for is constraints, which is one of the arguments passed to scipy.optimize.minimize. Write your own function that receives the parameters to constrain, like this:

import numpy as np
import scipy.optimize as spo

# A function to define the space where scipy.optimize.minimize
# should confine its search:
def apply_sum_constraint(inputs):
    # For an 'eq' constraint, the return value must come back as 0
    # for the candidate to be accepted; anything other than 0 is
    # rejected as not a valid answer.
    total = 50.0 - np.sum(inputs)
    return total

my_constraints = ({'type': 'eq', 'fun': apply_sum_constraint})

# f, guess and the extra arguments a, b, c are assumed to be
# defined elsewhere; the bounds must leave the constraint satisfiable
result = spo.minimize(f,
                      guess,
                      method='SLSQP',
                      args=(a, b, c),
                      bounds=((0.0, 50.0), (0.0, 50.0)),
                      options={'disp': True},
                      constraints=my_constraints)

The above example asserts that every new candidate in the neighborhood of the last searched point must add up to 50. Change that function to define the permissible search space, and the scipy.optimize.minimize function will waste no energy considering answers outside it.

– Eric Leschinski

I know this is late in the game, but maybe have a look at mystic. You can apply arbitrary python functions as penalty functions, or apply bounds constraints, and more… on any optimizer (including the algorithm from scipy.optimize.fmin).

https://github.com/uqfoundation/mystic
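For example, something along these lines bounds a Nelder-Mead search in mystic (a rough sketch; the exact solver signatures are worth checking against the mystic docs):

from mystic.solvers import fmin  # Nelder-Mead, mirroring scipy.optimize.fmin

rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

# bounds keep every candidate inside the admissible region
result = fmin(rosen, x0=[2.0, 2.0], bounds=[(0.0, 3.0), (0.0, 3.0)])
print(result)  # expect a point near [1.0, 1.0]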

– Mike McKerns