
Given a function f(x) that takes an input vector x and returns a vector of the same length, how can you find the roots of the function while setting constraints on x (e.g. a range for each component of x)?

To my surprise, I could not find a lot of useful information about this. In the scipy list of optimization and root-finding algorithms there seem to be some options for scalar functions, such as brentq. I cannot find any algorithm that supports such an option for the multivariate case, though.
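For the scalar case with a range, something like this works (my own toy example):

import numpy as np
from scipy.optimize import brentq

# brentq constrains the search to the bracket [1, 2]; cos has a root at pi/2 there.
root = brentq(np.cos, 1., 2.)
print(root)  # approximately 1.5708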

Of course, one could do a work-around like squaring each component of the returned vector and then using one of the minimizers, such as differential_evolution (I think this is actually the only one). I cannot imagine that this is a good strategy, though, since it kills the quadratic convergence of Newton's algorithm. I also find it really surprising that there does not seem to be an option for this, since it must be a really common problem. Have I missed something?
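To make the work-around concrete, here is a minimal sketch of what I mean (the example function is just for illustration):

import numpy as np
from scipy.optimize import differential_evolution

def f(x):
    # Toy vector-valued function; cos has roots at -pi/2 and pi/2 inside the box.
    return np.array([np.cos(x[0]), np.cos(x[1])])

def squaredResidual(x):
    # Zero exactly at the roots of f, so minimizing it finds the roots.
    return np.sum(f(x)**2)

# One (lower, upper) pair per component of x
bounds = [(-2., -1.), (1., 2.)]
result = differential_evolution(squaredResidual, bounds)
print(result.x)  # approximately [-pi/2, pi/2]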

Wolpertinger
  • Maybe I'm wrong, but in the link you posted, just under 'Scalar' you can find 'Multidimensional', which looks like what you're looking for: https://docs.scipy.org/doc/scipy/reference/optimize.html#multidimensional – Jan Zeiseweis May 18 '17 at 08:44
  • @JanZeiseweis As far as I could see, there is no option to use constraints in any of the multidimensional solvers that are listed. I guess that is what Wolpertinger is asking about. – jotasi May 18 '17 at 08:51
  • @jotasi That's true, totally forgot about this. Thanks – Jan Zeiseweis May 18 '17 at 09:27
  • In principle it's an interesting question, but for StackOverflow, unfortunately, I think it is a combination of too broad, unclear what is asked, and asking for tools/resources. Just imagine, what you want may depend on your specific problem. – NoDataDumpNoContribution May 18 '17 at 14:56
  • "Is there a multivariate solver with constraints" looks to me like asking for a resource. If I leave that out, I can't really find a question, but that may just be me. – NoDataDumpNoContribution May 18 '17 at 20:24
  • @Trilarion While I do not agree with the tone of Wolpertinger's response, I tend to disagree with you, as well. If you would formulate his question as "How to solve a multivariate root finding problem with constraints?" and tag it with python and maybe scipy, I do not see how this is not a clear question/asking for off-site resources. For the too-broad part, I guess a specific example might improve the question, but overall I think it is a proper question. – jotasi May 19 '17 at 08:10
  • CVXOPT is a fairly rigorous library for constrained convex optimization. You should definitely check it out: http://cvxopt.org/ – Gerges May 21 '17 at 02:19
  • @GergesDib Thanks! Having had a brief look at the documentation, I can only find optimisation problems. Does it also support root finding? – Wolpertinger May 21 '17 at 09:19

3 Answers


One (not particularly nice but hopefully working) option to work around this problem would be to give the solver a function that only has roots in the constrained region, and that is continued outside it in a way that pushes the solver back into the proper region (a little bit like here, but in multiple dimensions).

What one might do to achieve this (at least for rectangular constraints) is to implement a constrainedFunction that is linearly continued starting from the border value of your function:

import numpy as np

def constrainedFunction(x, f, lower, upper, minIncr=0.001):
    x = np.asarray(x)
    lower = np.asarray(lower)
    upper = np.asarray(upper)
    # Project x onto the feasible box; f is only ever evaluated inside it.
    xBorder = np.where(x < lower, lower, x)
    xBorder = np.where(x > upper, upper, xBorder)
    fBorder = f(xBorder)
    # Total distance by which x violates the box constraints.
    distFromBorder = (np.sum(np.where(x < lower, lower - x, 0.))
                      + np.sum(np.where(x > upper, x - upper, 0.)))
    # Continue f linearly outside the box with slope fBorder +/- minIncr,
    # so the continuation cannot cross zero outside the constrained region.
    return (fBorder + (fBorder
                       + np.where(fBorder > 0, minIncr, -minIncr))*distFromBorder)

You can pass this function an x value, the function f that you want to continue, and two arrays lower and upper of the same shape as x giving the lower and upper bounds in all dimensions. Now you can pass this function, rather than your original function, to the solver to find the roots.

For now, the steepness of the continuation is simply taken to be the border value, to prevent steep jumps for sign changes at the border. To prevent roots outside the constrained region, a small value is added/subtracted to positive/negative boundary values. I agree that this is not a very nice way to handle this, but it seems to work.

Here are two examples. For both the initial guess is outside the constrained region but a correct root in the constrained region is found.

Finding the roots of a multidimensional cosine constrained to [-2, -1]x[1, 2]:

from scipy import optimize as opt

opt.root(constrainedFunction, x0=np.zeros(2),
         args=(np.cos, np.asarray([-2., 1.]), np.asarray([-1, 2.])))

gives:

    fjac: array([[ -9.99999975e-01,   2.22992740e-04],
       [  2.22992740e-04,   9.99999975e-01]])
     fun: array([  6.12323400e-17,   6.12323400e-17])
 message: 'The solution converged.'
    nfev: 11
     qtf: array([ -2.50050470e-10,  -1.98160617e-11])
       r: array([-1.00281376,  0.03518108, -0.9971942 ])
  status: 1
 success: True
       x: array([-1.57079633,  1.57079633])

This also works for functions that are not diagonal:

def f(x):
    return np.asarray([0., np.cos(x.sum())])

opt.root(constrainedFunction, x0=np.zeros(2),
         args=(f, np.asarray([-2., 2.]), np.asarray([-1, 4.])))

gives:

    fjac: array([[ 0.00254922,  0.99999675],
       [-0.99999675,  0.00254922]])
     fun: array([  0.00000000e+00,   6.12323400e-17])
 message: 'The solution converged.'
    nfev: 11
     qtf: array([  1.63189544e-11,   4.16007911e-14])
       r: array([-0.75738638, -0.99212138, -0.00246647])
  status: 1
 success: True
       x: array([-1.65863336,  3.22942968])
jotasi
  • thank you for your great answer! I really like how you just connect to the existing solver with this trick. I am currently debating whether to give the bounty to you or Quentin, since I like both answers a lot. – Wolpertinger May 19 '17 at 13:51
  • @Wolpertinger Glad I could help. Honestly, I was kind of surprised that there does not seem to be a built-in scipy function for this. My guess is that for general constraints (not just several intervals), solving the problem as well as having a nice API design is probably rather difficult. – jotasi May 19 '17 at 13:58

If you want to handle an optimization with constraints, you can use the facile library, which is a lot easier than scipy.optimize.

Here is the link to the package:

https://pypi.python.org/pypi/facile/1.2

Here's how to use the facile library for your example. You will need to refine what I write here, which is only a general outline. If errors are raised, tell me which.

import facile

# Your vector x: one decision variable per component, each with its own bounds.
# 'Size' and the 'ranges' array are placeholders for your problem's dimension
# and the (lower, upper) range of each component.
x = [facile.variable(ranges[i][0], ranges[i][1]) for i in range(Size)]


# An example of extra structure: requiring the components to be ordered.
# (The per-component ranges could also be imposed here as constraints
# instead of in the variable declarations above.)
for i in range(len(x) - 1):
    facile.constraint(x[i] < x[i + 1])


def function(x):
    # Define here the function you want to find roots of,
    # returning one expression per component.
    ...


# Add as constraint that you want the vector to be a root of function:
# each component must vanish.
for component in function(x):
    facile.constraint(component == 0)


# Use the facile solver
if facile.solve(x):
    print([xi.value() for xi in x])
else:
    print("Impossible to find roots")
  • +1 and thank you for your answer, especially the new example. I will try and see if this works for my example. – Wolpertinger May 19 '17 at 13:52
  • One more question: can this handle complex numbers? (If not, I'll just separate real and imaginary parts; see the sketch after these comments.) – Wolpertinger May 20 '17 at 09:20
  • Good question... I don't think it is possible. When declaring a variable, you can use an argument for specifying which type you want to use; maybe complex numbers are included. There's nothing about it in the documentation. – Quentin Brzustowski May 22 '17 at 07:41
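To illustrate the real/imaginary split mentioned in the comments above: hand the solver the real and imaginary parts as two real components. A sketch with an assumed example g(z) = z**2 + 1, whose roots are +i and -i:

import numpy as np
from scipy import optimize as opt

def g(z):
    # Assumed complex example function; roots at z = +i and z = -i.
    return z**2 + 1

def realImagSplit(v):
    # v = [Re(z), Im(z)]; return [Re(g(z)), Im(g(z))].
    w = g(v[0] + 1j*v[1])
    return np.array([w.real, w.imag])

print(opt.root(realImagSplit, x0=np.array([0.5, 0.5])).x)  # roughly [0., 1.]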

At the risk of suggesting something you might've already crossed off, I believe this should be feasible with just scipy.optimize.minimize. The catch is that the function must have only one argument, but that argument can be a vector/list.

So f(x, y) becomes just f(z) where z = [x, y].

A good example that you might find useful, if you haven't come across it, is here.

If you want to impose bounds, as you mentioned, for a 2x1 vector, you could use:

# Specify a (lower, upper) tuple for each component of the vector
bnds = [(0., 1.) for i in range(len(x))]

And use this as the bounds parameter within minimize.
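For instance, a minimal sketch with an assumed toy objective (when bounds are given, minimize automatically picks a bound-aware method such as L-BFGS-B):

import numpy as np
from scipy.optimize import minimize

def f(z):
    # Assumed example objective over z = [x, y]; minimum at [0.3, 0.7].
    return (z[0] - 0.3)**2 + (z[1] - 0.7)**2

# One (lower, upper) tuple per component, as above
bnds = [(0., 1.) for i in range(2)]
res = minimize(f, x0=np.zeros(2), bounds=bnds)
print(res.x)  # approximately [0.3, 0.7]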

Brad Solomon
  • I upvoted this initially, but somehow I missed that this is just optimization again, not root finding. So I don't think it answers the question actually, which is *explicitly* not about optimization. – Wolpertinger May 23 '17 at 08:18
  • Ah, sorry--there is a simple fix. **Take whatever your function is, and make it return the absolute value** of whatever it is currently returning. This will get you your roots as the function will be minimized at zero. I.e. `f(x**3)` just becomes `return abs(x**3)`. – Brad Solomon May 23 '17 at 09:40
  • I *explicitly* said in the question that I don't want that, because it kills the quadratic convergence of Newton's algorithm... – Wolpertinger May 23 '17 at 10:56
  • I don't see any _explicit_ references to taking an absolute value, but then again this is over my head mathematically. Good luck. – Brad Solomon May 23 '17 at 11:07