
I have a problem: I have a vector "c" whose values I know and whose size is 100, and I want my code to find a vector "x" of size 30 and a matrix "y" of size (30, 100) such that x*y = c and the norms of "x" and "y" are minimal. I want to know the algorithm, so that I can do this for any c or change the dimensions of the vectors and matrices.

I tried using sympy and linalg.lstsq, but as far as I can tell, neither solves the problem for x and y at the same time. I also tried one approach that makes x random with a seed, but that is something I want to avoid: I want a method that adjusts both x and y from the data given by the user and doesn't depend on anything that doesn't come from the user (otherwise I would have to give a unique seed to each user).

Thanks for your help

2 Answers


You seem to be looking for the pseudo-inverse of the matrix. NumPy can handle those with linalg.pinv.

However, if you're expecting to get a unique answer for x, you should keep in mind that the matrix rank of y is at most 30. In other words, you can't have a bijection between the spaces of your 100-dimensional vectors and your 30-dimensional vectors.

Still, if you're interested in coding in the pseudo-inverses, you can write something like:

import numpy as np

c = np.random.rand(100)      # example right-hand side of size 100
y = np.random.rand(30, 100)  # example matrix of size (30, 100)

# least-squares solution of x*y = c via the pseudo-inverse of y
x = c.dot(np.linalg.pinv(y))
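
As a quick sanity check (my addition, continuing the snippet above): since y has rank at most 30, x*y is only the closest point to c in the row space of y, so the residual is generally nonzero for an arbitrary 100-dimensional c:

# the residual measures how far the best achievable x*y is from c
residual = np.linalg.norm(x.dot(y) - c)
print(residual)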
— Chris Mzz.

Well, I got a solution to the problem from here: Solving least-norm problem with Bilinear constraints

Using the fact that the optimal factors satisfy ||x|| = sqrt(||c||) (since ||c|| = ||x*y|| <= ||x||*||y|| and ||x||^2 + ||y||^2 >= 2*||x||*||y||, both bounds are tight when ||x|| = ||y|| = sqrt(||c||)), I can solve an optimization problem for the vector x:

import numpy as np
from scipy.optimize import minimize

def optimize(c, layerSize):
    def objective(x):
        # objective: make the elements of 'x' as different
        # from each other as possible
        n = len(x)
        dist_sum = 0
        for i in range(n):
            for j in range(i + 1, n):
                dist_sum += np.abs(x[i] - x[j])
        return -dist_sum

    def constraint(x, c):
        # enforce ||x|| = sqrt(||c||)
        return np.linalg.norm(x) - np.sqrt(np.linalg.norm(c))

    # Initial values for x: deterministic and non-constant
    # (np.zeros is a fully symmetric point for this objective,
    # so the solver would only ever find a constant vector)
    initial_guess = np.linspace(0.0, 1.0, layerSize)

    # Define the optimization problem
    problem = {
        'type': 'eq',
        'fun': constraint,
        'args': (c,)
    }

    # Solve the optimization problem
    solution = minimize(objective, initial_guess, constraints=problem)

    # Optimal vector x, and the matrix y that goes with it
    x_opt = solution.x
    y_opt = resolve(x_opt, c)

    return x_opt, y_opt

Then use another optimization problem to find y_opt from the vector x and c:

def resolve(x_opt, c):
    # Dimensions
    n = len(x_opt)
    d = len(c)

    def objective2(y):
        # minimize the norm of y ('minimize' passes y flattened, and the
        # norm of the flattened array equals the Frobenius norm of the matrix)
        return np.linalg.norm(y)

    # Initial values for y ('minimize' expects a 1-D array of n*d unknowns)
    initial_guess2 = np.zeros(n * d)

    # Define the optimization problem: enforce x_opt * y = c entrywise
    problem2 = {
        'type': 'eq',
        'fun': lambda y: np.dot(x_opt, y.reshape((n, d))) - c
    }

    # Solve the optimization problem
    solution = minimize(objective2, initial_guess2, constraints=problem2)

    # Optimal matrix y
    y_opt = solution.x.reshape((n, d))
    return y_opt
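
For example (my own usage sketch; the ramp vector below just stands in for the user's data):

c = np.arange(1.0, 101.0)  # any user-supplied vector of size 100
x_opt, y_opt = optimize(c, 30)
print(np.linalg.norm(x_opt.dot(y_opt) - c))  # small if the solver converged
# note: resolve() optimizes 30*100 = 3000 unknowns, so this can take a while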

With these, given a vector c, you can find a vector 'x' and a matrix 'y' that satisfy the system. The resolve function is there to absorb the rounding errors from the computation of 'x'; otherwise you can compute y_opt directly using the equation given in the post.
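
For reference, here is a minimal sketch of that direct computation, assuming the equation in the post is the rank-one construction y = outer(x, c)/||x||^2 (my reading of the linked answer, not a quote from it). It reproduces c exactly up to floating-point error and attains ||x|| = ||y|| = sqrt(||c||):

def closed_form(c, layerSize):
    # any unit vector works; a constant one keeps the result deterministic
    u = np.ones(layerSize) / np.sqrt(layerSize)
    r = np.sqrt(np.linalg.norm(c))
    x = r * u                 # ||x|| = sqrt(||c||)
    y = np.outer(u, c) / r    # ||y|| (Frobenius) = sqrt(||c||)
    return x, y               # x.dot(y) equals c up to rounding

x, y = closed_form(np.arange(1.0, 101.0), 30)
print(np.linalg.norm(x.dot(y) - np.arange(1.0, 101.0)))  # essentially zero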