
I would like to use scipy.optimize to minimize a function (eventually non-linear) over a large set of linear inequalities. As a warm-up, I'm trying to minimize x+y over the box 0<=x<=1, 0<=y<=1. Following the suggestion of Johnny Drama below, I am currently using a dict comprehension to produce the dictionary of inequalities, but am not getting the expected answer (min value = 0, attained at (0,0)).

New section of code (currently relevant):

import numpy as np
from scipy.optimize import minimize

# Create initial point.
x0 = [.1, .1]

# Create function to be minimized.
def obj(x):
    return x[0] + x[1]

# Create linear constraints  lbnd <= A*(x,y)^T <= upbnd
A = np.array([[1, 0], [0, 1]])
b1 = np.array([0, 0])
b2 = np.array([1, 1])

cons = [{"type": "ineq", "fun": lambda x: np.matmul(A[i, :], x) - b1[i]} for i in range(A.shape[0])]
cons2 = [{"type": "ineq", "fun": lambda x: b2[i] - np.matmul(A[i, :], x)} for i in range(A.shape[0])]
cons.extend(cons2)

sol = minimize(obj, x0, constraints=cons)
print(sol)

Original version of question:

I would like to use the LinearConstraint object in scipy.optimize, as described in the tutorial here: "Defining linear constraints"

I've tried to do a simpler example, where it's obvious what the answer should be: minimize x+y over the square 0<=x<=1, 0<=y<=1. Below is my code, which returns the error "'LinearConstraint' object is not iterable", but I don't see how I'm trying to iterate.

EDIT 1: The example is deliberately overly simple. Ultimately, I want to minimize a non-linear function over a large number of linear constraints. I know that I can use a dictionary comprehension to turn my matrix of constraints into a list of dictionaries, but I'd like to know whether LinearConstraint can be used as an off-the-shelf way to turn matrices into constraints.

EDIT 2: As pointed out by Johnny Drama, LinearConstraint is for a particular method. So above I've instead tried his suggested dict comprehension to produce the linear constraints, but am still not getting the expected answer.

Original section of code (now irrelevant):

from scipy.optimize import minimize
from scipy.optimize import LinearConstraint

# Create initial point.
x0 = [.1, .1]

# Create function to be minimized.
def obj(x):
    return x[0] + x[1]

# Create linear constraints  lbnd <= A*(x,y)^T <= upbnd
A = [[1, 0], [0, 1]]
lbnd = [0, 0]
upbnd = [1, 1]

lin_cons = LinearConstraint(A, lbnd, upbnd)
sol = minimize(obj, x0, constraints=lin_cons)
print(sol)
TBRS
3 Answers


As newbie already said, use scipy.optimize.linprog if you want to solve an LP (linear program), i.e. if your objective function and all your constraints are linear. If either the objective or one of the constraints isn't linear, we are facing an NLP (nonlinear optimization problem), which can be solved by scipy.optimize.minimize:

minimize(obj_fun, x0=xinit, bounds=bnds, constraints=cons)

where obj_fun is your objective function, xinit an initial point, bnds a list of tuples for the bounds of your variables, and cons a list of constraint dicts.


Here's an example. Suppose we want to solve the following NLP:

min  (x1 - 1)^2 + (x2 - 2.5)^2
s.t.  x1 - 2*x2 >= -2
     -x1 - 2*x2 >= -6
     -x1 + 2*x2 >= -2
      x1 >= 0, x2 >= 0

Since all constraints are linear, we can express them by an affine-linear function A*x - b such that we have the inequality A*x >= b. Here A is a 3x2 matrix and b is the 3x1 right-hand side vector:

import numpy as np
from scipy.optimize import minimize

obj_fun = lambda x: (x[0] - 1)**2 + (x[1] - 2.5)**2
A = np.array([[1, -2], [-1, -2], [-1, 2]])
b = np.array([-2, -6, -2])
bnds = [(0, None) for i in range(A.shape[1])]  # x_1 >= 0, x_2 >= 0
xinit = [0, 0] 

Now the only thing left to do is defining the constraints, each one has to be a dict of the form

{"type": "ineq", "fun": constr_fun}

where constr_fun is a callable such that constr_fun(x) >= 0 expresses the inequality. Thus, we could define each constraint separately:

cons = [{'type': 'ineq', 'fun': lambda x:  x[0] - 2 * x[1] + 2},
        {'type': 'ineq', 'fun': lambda x: -x[0] - 2 * x[1] + 6},
        {'type': 'ineq', 'fun': lambda x: -x[0] + 2 * x[1] + 2}]

and we'd be done. In practice, however, writing one dict per constraint can be quite cumbersome when there are many constraints. Instead, we can pass all of them at once as a single vector-valued constraint:

cons = [{"type": "ineq", "fun": lambda x: A @ x - b}]

where @ denotes the matrix multiplication operator. Putting it all together,

res = minimize(obj_fun, x0=xinit, bounds=bnds, constraints=cons)
print(res)

yields

     fun: 0.799999999999998
     jac: array([ 0.79999999, -1.59999999])
 message: 'Optimization terminated successfully.'
    nfev: 16
     nit: 4
    njev: 4
  status: 0
 success: True
       x: array([1.39999999, 1.69999999])

Likewise, you could use a LinearConstraint object:

from scipy.optimize import LinearConstraint

# lb <= A @ x <= ub. In our case: lb = b, ub = inf
lincon = LinearConstraint(A, b, np.inf*np.ones(3))

# rest as above
res = minimize(obj_fun, x0=xinit, bounds=bnds, constraints=(lincon,))
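
Per the comments below, LinearConstraint objects are what the 'trust-constr' method expects, so a minimal sketch of that variant (assuming SciPy >= 1.1, which introduced 'trust-constr') would be:

# Same problem via the trust-region constrained method, which consumes
# LinearConstraint objects directly (a single object or a sequence).
res = minimize(obj_fun, x0=xinit, method='trust-constr',
               bounds=bnds, constraints=lincon)
print(res.x)  # should again be close to [1.4, 1.7]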

Edit: To answer your new question:

# b1 <= A*x    <==>   -b1 >= -A*x     <==>   A*x - b1 >= 0
# A*x <= b2    <==>   A*x - b2 <= 0   <==>  -A*x + b2 >= 0
cons = [{"type": "ineq", "fun": lambda x: A @ x - b1},
        {"type": "ineq", "fun": lambda x: -A @ x + b2}]
sol = minimize(obj, x0, constraints=cons)
print(sol)
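
For completeness, here is a self-contained sketch of this fix applied to the warm-up problem from the question (reusing obj, x0, A, b1, b2 as defined there); it should recover the expected minimum of 0 at (0, 0):

import numpy as np
from scipy.optimize import minimize

def obj(x):
    return x[0] + x[1]

x0 = [.1, .1]
A = np.array([[1, 0], [0, 1]])
b1 = np.array([0, 0])  # lower bounds: A @ x >= b1
b2 = np.array([1, 1])  # upper bounds: A @ x <= b2

# Both constraint functions are vector-valued; SLSQP treats each
# component of the returned vector as a separate inequality >= 0.
cons = [{"type": "ineq", "fun": lambda x: A @ x - b1},
        {"type": "ineq", "fun": lambda x: -A @ x + b2}]

sol = minimize(obj, x0, constraints=cons)
print(sol.x)  # approximately [0, 0]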
joni
  • Thanks. To clarify, the example I gave is deliberately overly simple. In practice, I have a non-linear objective function and a large list of linear constraints, which I already have packaged in a numpy array. Yes, I could do it "by hand", using a dictionary comprehension. But then what is "LinearConstraint" good for? Isn't it supposed to somehow produce these constraints for you from a matrix? – TBRS Aug 24 '18 at 11:48
  • [See here](https://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html#constrained-minimization-of-multivariate-scalar-functions-minimize): *The method 'trust-constr' requires the constraints to be defined as a sequence of objects `LinearConstraint` and `NonlinearConstraint`. Methods 'SLSQP' and 'COBYLA', on the other hand, require constraints to be defined as a sequence of dictionaries, with keys `type`, `fun` and `jac`.* So the `LinearConstraint` object is only needed if you want to use the trust-region constrained method ('trust-constr'). – joni Aug 24 '18 at 11:57
  • OK, that helps. It seems you have to provide Jacobian and Hessian too. For now, I'll try your dict-comprehension. – TBRS Aug 24 '18 at 12:16
  • Thanks again. I've done what you suggested in terms of dict comprehensions, but seem to be getting the wrong answer. It seems that the constraints are not being read correctly. I've added an example in the original post. – TBRS Aug 24 '18 at 12:37
  • By the way, according to https://docs.scipy.org/doc/scipy/reference/optimize.html, "inequality means that it is to be non-negative", whereas in your post you say "where constr_fun is a callable function such that constr_fun <= 0 " (so non-positive). Presumably this is a typo. – TBRS Aug 24 '18 at 13:14
  • You're right. To answer your second question: Unfortunately the way of passing all constraints at once was wrong in my previous answer. I edited my answer. – joni Aug 24 '18 at 14:54
  • OK. That seems to work. Do you know what the original version was doing? – TBRS Aug 24 '18 at 14:58
  • With the previous way, each dict's function in the `cons` list was the same, because every lambda closed over the same loop variable. [See this answer for more details](https://stackoverflow.com/questions/233673/how-do-lexical-closures-work); a minimal sketch of the pitfall and its fix follows these comments. – joni Aug 24 '18 at 15:06
  • @joni You are right that the SciPy documentation kind of implies that the `LinearConstraint` and `NonlinearConstraint` objects are for the `trust-constr` algorithm, but I have been using the `NonlinearConstraint` object with SLSQP and it is working. Does anyone know if the `LinearConstraint` object works with SLSQP too? I will try to find out. – ares Aug 01 '19 at 16:57
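
To make the closure pitfall joni describes concrete, here is a minimal sketch contrasting the buggy comprehension from the question with the standard default-argument fix:

import numpy as np

A = np.array([[1, 0], [0, 1]])   # from the question
b1 = np.array([0, 0])

# Buggy: each lambda closes over the variable i, not its value, so after
# the comprehension finishes every "fun" uses the final i (here i = 1).
cons_buggy = [{"type": "ineq", "fun": lambda x: A[i, :] @ x - b1[i]}
              for i in range(A.shape[0])]

# Fixed: bind the current i as a default argument, which is evaluated
# when the lambda is defined rather than when it is called.
cons_fixed = [{"type": "ineq", "fun": lambda x, i=i: A[i, :] @ x - b1[i]}
              for i in range(A.shape[0])]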

The error lies in the way you call the minimize function

sol = minimize(obj, x0, constraints=lin_cons)

Indeed, constraints expects a dictionary or a list of dictionaries; see the documentation of scipy.optimize.minimize (https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html).
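
In fact, for simple box constraints like these, minimize's own bounds argument avoids constraint dicts entirely; a minimal sketch:

from scipy.optimize import minimize

def obj(x):
    return x[0] + x[1]

# The box 0 <= x <= 1, 0 <= y <= 1 is passed as bounds,
# so no constraint dicts are needed at all.
sol = minimize(obj, [.1, .1], bounds=[(0, 1), (0, 1)])
print(sol.x)  # approximately [0, 0]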

For your specific LP I would write something like:

from scipy.optimize import linprog
import numpy as np

c = np.array([1, 1])

res = linprog(c, bounds=(0, 1))

print('Optimal value: {}'.format(res.fun))
print('Values: {}'.format(res.x))

which outputs

Optimal value: -0.0
Values: [ 0.  0.]

since there are no constraints beyond the bounds.

Suppose you want to add the constraint x + y >= 0.5 (which is equivalent to -x - y <= -0.5). Then your LP becomes:

c = np.array([1, 1])
A_ub = np.array([[-1, -1]])
b_ub = np.array([-0.5])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, 1))

print('Optimal value: {}'.format(res.fun))
print('Values: {}'.format(res.x))

which now outputs:

Optimal value: 0.5
Values: [ 0.5  0. ]
abc
  • Thanks, but I don't particularly want to do linear programming, since I'll ultimately want a non-linear objective function. You are right that I need some kind of dictionary for constraints, but that is not clear from the tutorial to which I linked, and I don't yet see how to put LinearConstraint into a dictionary. – TBRS Aug 24 '18 at 11:01

@joni:

# b1 <= A*x    <==>   -b1 >= -A*x     <==>   A*x - b1 >= 0
# A*x <= b2    <==>   A*x - b2 <= 0   <==>  -A*x + b2 >= 0
cons = [{"type": "ineq", "fun": lambda x: A @ x - b1},
        {"type": "ineq", "fun": lambda x: -A @ x + b2}]

sol = minimize(obj, x0, constraints=cons)
print(sol)

It is a really interesting solution and it works quite well for me. I am trying to do the same thing, but with both an equality and an inequality constraint:

So I have Aeq @ x - b = 0 and it works fine, but when I add Aineq @ x - Lb and Ub - Aineq @ x it doesn't seem to work, because Aeq and Aineq do not have the same dimensions:

def DefineLinearConstraint(Aeq, b, Aineq, Lb, Ub):
    constraints = [{"type": "eq",   "fun": lambda x: Aeq @ x - b},
                   {"type": "ineq", "fun": lambda x: Aineq @ x - Lb},
                   {"type": "ineq", "fun": lambda x: -Aineq @ x + Ub}]
    return constraints

I have the following variables (x has the same dimension as x0):

Aeq = array([1, 1, 1, 1, 1], dtype=int64)
x0 = array([[0.2], [0.2], [0.2], [0.2], [0.2]], dtype=object)
b = array([1], dtype=object)

and for the inequality constraint:

Aineq = array([[1, 1, 1, 0, 0], [0, 0, 0, 1, 1]], dtype=int64)
Lb = array([[0], [0]], dtype=object)
Ub = array([[1], [1]], dtype=object)

The idea is to add group constraints; in my example I have 2 group constraints that I would like to be able to modify.
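
For what it's worth, a self-contained sketch of this setup under some assumptions: the objective below is only a placeholder (the original isn't given), and the arrays are recast as flat 1-D float arrays, since object arrays of column vectors are the usual reason such constraints fail. Note that Aeq and Aineq having different shapes is fine, because each constraint dict is evaluated independently:

import numpy as np
from scipy.optimize import minimize

def DefineLinearConstraint(Aeq, b, Aineq, Lb, Ub):
    return [{"type": "eq",   "fun": lambda x: Aeq @ x - b},
            {"type": "ineq", "fun": lambda x: Aineq @ x - Lb},
            {"type": "ineq", "fun": lambda x: -Aineq @ x + Ub}]

obj = lambda x: x @ x                     # placeholder objective
Aeq = np.array([1, 1, 1, 1, 1], dtype=float)
b = np.array([1.0])                       # weights sum to 1
Aineq = np.array([[1, 1, 1, 0, 0],
                  [0, 0, 0, 1, 1]], dtype=float)  # two group constraints
Lb = np.array([0.0, 0.0])
Ub = np.array([1.0, 1.0])

x0 = np.full(5, 0.2)                      # flat 1-D start, not a column vector
sol = minimize(obj, x0, constraints=DefineLinearConstraint(Aeq, b, Aineq, Lb, Ub))
print(sol.x)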

  • Please try to improve the formatting of your answer. – Xbel Oct 08 '21 at 09:09
  • Hello Xbel, sorry for the formatting; I code on my office computer and because of a proxy ban I cannot write from my computer and have to do it from my phone, which makes it difficult to format properly. Sorry for the inconvenience... – Milan Fabien Oct 08 '21 at 09:59
  • This should be a formatted answer. – Beso Oct 08 '21 at 10:38
  • No worries. It's just that it's much harder to read... – Xbel Oct 09 '21 at 10:24