One (not particularly nice but hopefully working) option to work around this problem would be to give the solver a function that only has roots in the constrained region and that is continued in a way that pushes the solver back into the proper region (a little bit like here, but in multiple dimensions).
What one might do to achieve this (at least for rectangular constraints) is to implement a constrainedFunction that is linearly continued starting from the border value of your function:
import numpy as np

def constrainedFunction(x, f, lower, upper, minIncr=0.001):
    x = np.asarray(x)
    lower = np.asarray(lower)
    upper = np.asarray(upper)
    # Clip x to the rectangular region [lower, upper] and evaluate f there.
    xBorder = np.where(x < lower, lower, x)
    xBorder = np.where(x > upper, upper, xBorder)
    fBorder = f(xBorder)
    # Total distance by which x violates the bounds, summed over all dimensions.
    distFromBorder = (np.sum(np.where(x < lower, lower - x, 0.))
                      + np.sum(np.where(x > upper, x - upper, 0.)))
    # Continue linearly away from the border value; the small offset minIncr
    # keeps the continued part away from zero so no spurious roots appear.
    return (fBorder + (fBorder
                       + np.where(fBorder > 0, minIncr, -minIncr)) * distFromBorder)
You can pass this function an x value, the function f that you want to continue, as well as two arrays lower and upper of the same shape as x giving the lower and upper bounds in all dimensions. Now you can pass this function rather than your original function to the solver to find the roots.
At the moment, the steepness of the continuation is simply taken as the border value, which prevents steep jumps in case of a sign change at the border. To prevent roots outside the constrained region, a small value is added to positive boundary values and subtracted from negative ones. I agree that this is not a very nice way to handle this, but it seems to work.
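To see the effect of the continuation (a minimal sketch using the constrainedFunction defined above with a 1-D cosine constrained to [1, 2]; the test points are just illustrative), you can evaluate it inside and outside the box and check that the magnitude grows with the distance from the border, so no new sign changes can occur outside:
lower, upper = np.asarray([1.]), np.asarray([2.])
for point in ([1.5], [2.5], [3.5]):
    # Inside [1, 2] the original cosine is returned; outside, the border
    # value cos(2) is scaled further away from zero with increasing distance.
    print(point, constrainedFunction(np.asarray(point), np.cos, lower, upper))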
Here are two examples. For both, the initial guess lies outside the constrained region, but a correct root inside the constrained region is found.
Finding the roots of a multidimensional cosine constrained to [-2, -1] x [1, 2]:
from scipy import optimize as opt

opt.root(constrainedFunction, x0=np.zeros(2),
         args=(np.cos, np.asarray([-2., 1.]), np.asarray([-1., 2.])))
gives:
    fjac: array([[ -9.99999975e-01, 2.22992740e-04],
          [ 2.22992740e-04, 9.99999975e-01]])
     fun: array([ 6.12323400e-17, 6.12323400e-17])
 message: 'The solution converged.'
    nfev: 11
     qtf: array([ -2.50050470e-10, -1.98160617e-11])
       r: array([-1.00281376, 0.03518108, -0.9971942 ])
  status: 1
 success: True
       x: array([-1.57079633, 1.57079633])
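If you want to double-check the result (a small sketch; sol is just a name I chose for the return value here), you can verify that the root lies inside the box and that the original cosine, not only the continued function, vanishes there:
sol = opt.root(constrainedFunction, x0=np.zeros(2),
               args=(np.cos, np.asarray([-2., 1.]), np.asarray([-1., 2.])))
# Both components are inside [-2, -1] x [1, 2] ...
print(np.all((sol.x >= [-2., 1.]) & (sol.x <= [-1., 2.])))
# ... and the cosine is numerically zero there.
print(np.cos(sol.x))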
This also works for functions that are not diagonal, i.e. whose components couple several of the input variables:
def f(x):
    return np.asarray([0., np.cos(x.sum())])

opt.root(constrainedFunction, x0=np.zeros(2),
         args=(f, np.asarray([-2., 2.]), np.asarray([-1., 4.])))
gives:
    fjac: array([[ 0.00254922, 0.99999675],
          [-0.99999675, 0.00254922]])
     fun: array([ 0.00000000e+00, 6.12323400e-17])
 message: 'The solution converged.'
    nfev: 11
     qtf: array([ 1.63189544e-11, 4.16007911e-14])
       r: array([-0.75738638, -0.99212138, -0.00246647])
  status: 1
 success: True
       x: array([-1.65863336, 3.22942968])
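The same kind of check works here as well (again just a sketch, with the result stored in sol): the solution stays inside the box [-2, -1] x [2, 4] and the original f itself is zero there:
sol = opt.root(constrainedFunction, x0=np.zeros(2),
               args=(f, np.asarray([-2., 2.]), np.asarray([-1., 4.])))
print(np.all((sol.x >= [-2., 2.]) & (sol.x <= [-1., 4.])))  # True: inside the box
print(f(sol.x))                                             # numerically [0, 0]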