
First of all, I know that these threads exist! So bear with me; my question is not fully answered by them.

As an example, assume we are in a 4-dimensional vector space, i.e. R^4. We are looking at the two linear equations:

3*x1 - 2*x2 + 7*x3 - 2*x4 = 6
1*x1 + 3*x2 - 2*x3 + 5*x4 = -2

The actual question is: is there a way to generate a number N of points that solve both of these equations, making use of the linear solvers from NumPy etc.?

The main problem with all Python libraries I have tried so far is that they need n equations for an n-dimensional space.

Solving the problem is very easy for one equation, since you can simply use n-1 randomly generated values and adapt the last one such that the vector solves the equation.
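For illustration, this is roughly what I do for the single-equation case (a minimal sketch, using the coefficients of the first equation above):

import numpy as np

# single equation: c . x = d, here 3*x1 - 2*x2 + 7*x3 - 2*x4 = 6
c = np.array([3, -2, 7, -2])
d = 6

x = np.random.rand(4)                  # random values for all components
x[-1] = (d - c[:-1] @ x[:-1]) / c[-1]  # adapt the last one (assumes c[-1] != 0)
assert np.isclose(c @ x, d)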

My expected result would be a list of N "randomly" generated points that solve k linear equations in an n-dimensional space, where k<n.

3 Answers


A system of linear equations with more variables than equations is known as an underdetermined system.

An underdetermined linear system has either no solution or infinitely many solutions.

...

There are algorithms to decide whether an underdetermined system has solutions, and if it has any, to express all solutions as linear functions of k of the variables (same k as above). The simplest one is Gaussian elimination.

As you say, many functions available in libraries (e.g. np.linalg.solve) require a square matrix (i.e. n equations for n unknowns); what you are looking for is an implementation of Gaussian elimination for non-square linear systems.
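For example, if SymPy is available, its linsolve performs this kind of elimination and returns the solution set parametrized by the free variables (a sketch, not part of NumPy):

from sympy import symbols, Matrix, linsolve

x1, x2, x3, x4 = symbols('x1 x2 x3 x4')
A = Matrix([[3, -2, 7, -2],
            [1, 3, -2, 5]])
b = Matrix([6, -2])

# solutions expressed as linear functions of the free variables x3, x4
print(linsolve((A, b), x1, x2, x3, x4))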

This isn't 'random', but np.linalg.lstsq (least squares) will solve non-square systems:

Return the least-squares solution to a linear matrix equation.

Solves the equation a x = b by computing a vector x that minimizes the Euclidean 2-norm || b - a x ||^2. The equation may be under-, well-, or over- determined (i.e., the number of linearly independent rows of a can be less than, equal to, or greater than its number of linearly independent columns). If a is square and of full rank, then x (but for round-off error) is the “exact” solution of the equation.
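Applied to the system in the question, a minimal sketch (this yields one particular, minimum-norm solution rather than N random ones):

import numpy as np

A = np.array([[3, -2, 7, -2],
              [1, 3, -2, 5]], dtype=float)
b = np.array([6, -2], dtype=float)

# least-squares solution of the underdetermined system
x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(x, np.allclose(A @ x, b))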

For more info, see: solving Ax = b for a non-square matrix A using python


Since you have an underdetermined system of equations (fewer constraints than unknowns, i.e. fewer equations than variables), you can just pick some arbitrary values for x3 and x4 and solve the system in x1, x2 (this has 2 variables / 2 equations).

You will just need to check that the resulting system is not inconsistent (i.e. it admits no solution) and that there are no duplicate solutions.

You could, for instance, fix x3 = 0 and, by choosing random values of x4, generate solutions for your equations in x1, x2.

Here's an example generating 10 "random" solutions

import numpy as np

# coefficient matrix of the x1, x2 sub-system (first two columns of the equations)
a = np.array([[3, -2],
              [1, 3]])

n = 10
x3 = 0
X = []
for x4 in np.random.choice(1000, n):
    # move the x3 and x4 terms to the right-hand side
    b = np.array([[6 - 7*x3 + 2*x4], [-2 + 2*x3 - 5*x4]])
    x = np.linalg.solve(a, b)
    X.append(np.append(x, [x3, x4]))

# check solution nr. 3
[x1, x2, x3, x4] = X[3]
3*x1 - 2*x2 + 7*x3 - 2*x4
# output:  6.0
1*x1 + 3*x2 - 2*x3 + 5*x4
# output: -2.0

Thanks for the answers, which both helped me and pointed me in the right direction.

I now have an easy step-by-step solution to my problem for arbitrary k<n.

1. Find one solution to all equations given. This can be done by using

 solution_vec = numpy.linalg.lstsq(A, b)[0]

(lstsq returns a tuple whose first entry is the solution vector); this gives one solution as seen in ukemi's answer. In my example above, the matrix A holds the coefficients of the equations on the left side and b is the vector on the right side.

2. Determine the null space of your matrix A.

These are all vectors v such that the scalar product v*A_i = 0 for every(!) row A_i of A. The following function, adapted from one found in this thread, can be used to get representatives of the null space of A:

import numpy as np
import scipy.linalg

def nullSpaceOfMatrix(A, eps=1e-15):
    u, s, vh = scipy.linalg.svd(A)
    # pad the mask so it also covers the rows of vh beyond len(s), which
    # belong to the null space when A has more columns than rows
    null_mask = np.concatenate((s <= eps, np.ones(vh.shape[0] - len(s), dtype=bool)))
    null_space = np.compress(null_mask, vh, axis=0)
    return np.transpose(null_space)
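If your SciPy version is recent enough (around 1.1 or later, if I remember correctly), scipy.linalg.null_space does the same job directly and returns an orthonormal basis of the null space as columns:

import numpy as np
import scipy.linalg

A = np.array([[3, -2, 7, -2],
              [1, 3, -2, 5]], dtype=float)

ns = scipy.linalg.null_space(A)   # shape (4, 2) for this example
print(np.allclose(A @ ns, 0))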

3. Generate as many (N) "random" linear combinations (meaning with random coefficients) of solution_vec and the null-space vectors of the matrix as you want! This works because the scalar product is linear and the null-space vectors have a scalar product of 0 with the coefficient vector of every equation. Each such linear combination must contain solution_vec exactly once, as in:

linear_combination = solution_vec + a*nullspace_vec_1 + b*nullspace_vec_2 + ...

where a and b can be randomly chosen.
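Putting the three steps together, a sketch of the whole procedure for my example (using scipy.linalg.null_space for step 2; the variable names are mine):

import numpy as np
import scipy.linalg

A = np.array([[3, -2, 7, -2],
              [1, 3, -2, 5]], dtype=float)
b = np.array([6, -2], dtype=float)
N = 10

solution_vec = np.linalg.lstsq(A, b, rcond=None)[0]   # step 1: one particular solution
nullspace = scipy.linalg.null_space(A)                # step 2: null-space basis (columns)

# step 3: particular solution plus random combinations of the null-space basis
coeffs = np.random.uniform(-10, 10, size=(N, nullspace.shape[1]))
points = solution_vec + coeffs @ nullspace.T

print(np.allclose(points @ A.T, b))   # every generated point solves both equations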