
I would like to optimize the following function with scipy, adding the constraint x[0] - x[1] > 0. However, when I print this expression inside the objective function it also takes negative values, while the optimization still terminates successfully. My final goal would be to minimize something like sqrt(0.1*x[0]*x[1]), which fails with a math domain error.

import numpy as np
from scipy.optimize import minimize


def f(x):
    print(x[0] - x[1])
    # return sqrt(0.1*x[0]*x[1])  # requires: from math import sqrt
    return 0.1*x[0]*x[1]

def ineq_constraint(x):

    return x[0] - x[1]


con = {'type': 'ineq', 'fun': ineq_constraint}
x0 = [1, 1]
res = minimize(f, x0, method='SLSQP', constraints=con)

print(res)

And the output:

0.0
0.0
1.49011611938e-08
-1.49011611938e-08
0.0
0.0
1.49011611938e-08
-1.49011611938e-08
0.0
0.0
1.49011611938e-08
-1.49011611938e-08
4.65661176285e-10
4.65661176285e-10
1.53668223701e-08
-1.44355000176e-08
     fun: 1.7509862319755833e-18
     jac: array([  3.95812066e-10,   4.42378184e-10,   0.00000000e+00])
 message: 'Optimization terminated successfully.'
    nfev: 16
     nit: 4
    njev: 4
  status: 0
 success: True
       x: array([  4.42378184e-09,   3.95812066e-09])
  • In your solution ```x[0] - x[1] >= 0```, what's wrong? – sascha Dec 06 '17 at 20:36
  • As far as your final goal is concerned, a minimum of 0.1*x[0]*x[1] is also a minimum of the square root of this function. You've struck lucky! – Bill Bell Dec 06 '17 at 20:50
  • Thank you all for the comments! @BillBell, this was a dummy simplification of a multivariate problem I am working on - in that case, unfortunately, luck is not an option. – Milán Janosov Dec 06 '17 at 21:01
  • @sascha, my problem is that I would like to do operations (e.g. the sqrt in #return sqrt(0.1*x[0]*x[1]) which can lead to math error under certain conditions. I wanted to use constraints to disregard these situations during the optimization, but apparently, conditions only apply to the final solution? How could I resolve that? – Milán Janosov Dec 06 '17 at 21:01
  • I was being ironic. Just leave out the square root, as long as it encloses the entire function being optimised. The result will be the same. And you won't need any constraints that you throw in, in attempts to keep the function within the domain of the square root. – Bill Bell Dec 06 '17 at 21:12
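The point in the last comment can be sketched with a toy example: since sqrt is monotonically increasing, minimizing g(x) and sqrt(g(x)) yields the same minimizer wherever g is nonnegative (the function g below is an illustrative assumption, not from the thread):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# a strictly positive toy objective with its minimum at t = 2
g = lambda t: (t - 2.0)**2 + 1.0

res_g = minimize_scalar(g)
res_sqrt = minimize_scalar(lambda t: np.sqrt(g(t)))

# both minimizers agree up to solver tolerance
print(res_g.x, res_sqrt.x)
```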

1 Answer


In the general case (we don't know your whole task), constraints are not enforced at every step, as you observed. Without changing the optimizer there is not much you can do, and even finding an appropriate optimizer may not be easy.

For your case, it would work if your variables are nonnegative. Whether that carries over to your real task, we don't know.

Now there are two approaches for nonnegativity:

  • inequalities
  • bounds

Bounds are handled explicitly (as far as I know) and will not be violated during optimization.

Example:

import numpy as np
from scipy.optimize import minimize
from math import sqrt

def f(x):
    print(x)
    return sqrt(0.1*x[0]*x[1])

def ineq_constraint(x):
    return x[0] - x[1]

con = {'type': 'ineq', 'fun': ineq_constraint}

x0 = [1, 1]
res = minimize(f, x0, method='SLSQP', constraints=con, bounds=[(0, None) for i in range(len(x0))])
print(res)

Output:

[1. 1.]
[1. 1.]
[1.00000001 1.        ]
[1.         1.00000001]
[0.84188612 0.84188612]
[0.84188612 0.84188612]
[0.84188613 0.84188612]
[0.84188612 0.84188613]
[0.05131671 0.05131669]
[0.05131671 0.05131669]
[0.05131672 0.05131669]
[0.05131671 0.0513167 ]
[0. 0.]
[0. 0.]
[1.49011612e-08 0.00000000e+00]
[0.00000000e+00 1.49011612e-08]
     fun: 0.0
     jac: array([0., 0.])
 message: 'Optimization terminated successfully.'
    nfev: 16
     nit: 4
    njev: 4
  status: 0
 success: True
       x: array([0., 0.])
sascha
  • Thank you very much for your reply - learning that constraints are not enforced at all steps is very useful to know. I have also considered bounds, but I couldn't find a way out with them either. The term of which I have to take the sqrt (and also the log) in the objective function is a complicated nonlinear function of 6 variables. According to scipy docs, bounds only apply to the variables we are optimizing. I thought of something like catching the exception in case of MathError in f and just pass before return, but that seems very shallow. – Milán Janosov Dec 06 '17 at 21:51
  • @MilánJanosov Sure, you can try to catch these and output some proxy value. That somewhat plays against the smoothness assumption, but it's worth a try. – sascha Dec 06 '17 at 22:32
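The exception-catching idea from the comments could be sketched roughly like this; the penalty value returned on a domain error is an assumption for illustration (it trades away smoothness, as noted above), not code from the thread:

```python
from math import sqrt

from scipy.optimize import minimize


def f(x):
    try:
        return sqrt(0.1 * x[0] * x[1])
    except ValueError:
        # sqrt got a negative argument: return a large proxy value
        # to push the optimizer away (assumed penalty, breaks smoothness)
        return 1e6


def ineq_constraint(x):
    return x[0] - x[1]


con = {'type': 'ineq', 'fun': ineq_constraint}
res = minimize(f, [1, 1], method='SLSQP', constraints=con)
print(res.x)
```

Whether SLSQP copes with the discontinuous penalty depends on the problem; a smooth alternative would be clamping, e.g. sqrt(max(0.1*x[0]*x[1], 0.0)).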