Per the docs, linalg.solve computes the "exact" solution, x, of the well-determined, i.e., full rank, linear matrix equation ax = b. Because the matrix is full rank, the system has exactly one solution. If that solution does not sum to 1, then adding the extra constraint would yield no solution at all. However, you could use scipy.optimize.minimize to find the point on the constraint plane which minimizes the quantity ||Ax-b||^2:
def f(x):
    y = np.dot(A, x) - b
    return np.dot(y, y)

cons = ({'type': 'eq', 'fun': lambda x: x.sum() - 1})
res = optimize.minimize(f, [0, 0, 0], method='SLSQP', constraints=cons,
                        options={'disp': False})
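If it helps, here is the same recipe packaged as a small self-contained function. This is just a sketch; the helper name `constrained_lstsq` is mine, not from scipy:

```python
import numpy as np
from scipy import optimize

def constrained_lstsq(A, b, x0=None):
    """Minimize ||A @ x - b||**2 subject to x.sum() == 1, via SLSQP."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    if x0 is None:
        x0 = np.zeros(A.shape[1])  # arbitrary starting point
    cons = ({'type': 'eq', 'fun': lambda x: x.sum() - 1},)
    res = optimize.minimize(lambda x: ((A @ x - b) ** 2).sum(), x0,
                            method='SLSQP', constraints=cons)
    return res.x

x = constrained_lstsq([[1, 3, 4], [5, 6, 9], [1, 2, 3]], [1, 2, 1])
print(x, x.sum())
```

The constraint tuple and the objective are exactly the ones used above; wrapping them only saves retyping when you have several systems to solve.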
For example, given this system of equations
import numpy as np
import numpy.linalg as LA
import scipy.optimize as optimize
A = np.array([[1, 3, 4], [5, 6, 9], [1, 2, 3]])
b = np.array([1, 2, 1])
x = LA.solve(A, b)
The solution does not add up to 1:
print(x)
# [-0.5 -1.5 1.5]
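To see why no exact solution exists once the constraint is added, you can (this is just a sketch, not part of the linalg.solve API) append the constraint as a fourth equation and compare ranks:

```python
import numpy as np

A = np.array([[1, 3, 4], [5, 6, 9], [1, 2, 3]])
b = np.array([1, 2, 1])

# Treat x.sum() == 1 as a fourth equation: a row of ones with RHS 1.
A_aug = np.vstack([A, np.ones(3)])
b_aug = np.append(b, 1.0)

# The augmented system is solvable only if the coefficient matrix and
# the augmented matrix [A_aug | b_aug] have the same rank.
print(np.linalg.matrix_rank(A_aug))                            # 3
print(np.linalg.matrix_rank(np.column_stack([A_aug, b_aug])))  # 4
```

The ranks differ, so the four equations are inconsistent and no x satisfies all of them exactly.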
But you could try to minimize f:

def f(x):
    y = np.dot(A, x) - b
    return np.dot(y, y)
subject to the constraint cons:

cons = ({'type': 'eq', 'fun': lambda x: x.sum() - 1})
res = optimize.minimize(f, [0, 0, 0], method='SLSQP', constraints=cons,
                        options={'disp': False})
xbest = res['x']
# array([ 0.30000717, 1.89998823, -1.1999954 ])
xbest sums to 1:

print(xbest.sum())
# 1
The difference A·xbest - b is:
print(np.dot(A, xbest) - b)
# [ 0.19999026 0.10000663 -0.50000257]
and the sum of the squares of the differences (also computable as f(xbest)) is:

print(res['fun'])
# 0.30000000014542572
No other value of x satisfying the constraint yields a smaller value of this quantity.
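As a cross-check (a sketch using the KKT conditions, not something the scipy route requires): this is a linear least-squares problem with one linear equality constraint, so the minimizer can also be obtained exactly by solving a single linear system:

```python
import numpy as np

A = np.array([[1, 3, 4], [5, 6, 9], [1, 2, 3]])
b = np.array([1, 2, 1])
ones = np.ones(3)

# KKT system for  min ||A x - b||^2  s.t.  ones @ x == 1:
#   [2 A^T A   ones] [x  ]   [2 A^T b]
#   [ones^T    0   ] [lam] = [1      ]
K = np.block([[2 * A.T @ A, ones[:, None]],
              [ones[None, :], np.zeros((1, 1))]])
rhs = np.append(2 * A.T @ b, 1.0)
x_exact = np.linalg.solve(K, rhs)[:3]

print(x_exact)        # ~ [0.3, 1.9, -1.2], matching the SLSQP result
print(x_exact.sum())  # ~ 1.0
```

Agreement between this direct solve and the SLSQP output confirms the iterative result.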