
According to the official user guide, sgelsd solves the least-squares problem

min_x || b - Ax ||_2

and allows the matrix A to be rectangular and rank-deficient. According to the interface description in the sgelsd source code, b is an input/output parameter: when sgelsd finishes, b holds the solution. So b occupies m*sizeof(float) bytes, while the solution x needs n*sizeof(float) bytes (assuming A is an m*n matrix and b is an m*1 vector).

However, when n > m, the memory allocated for b is too small to hold the solution x. How should this situation be handled? I could not find the answer in the comments of the sgelsd source code. Can I simply allocate n*sizeof(float) bytes for b and use the first m*sizeof(float) bytes to store the b vector?

Thanks.

coinsyx

1 Answer


This example from Intel MKL has the answer: B is allocated as LDB*NRHS (with LDB = max(M,N)) and zero-padded. Note that the input B is not necessarily a single vector; SGELSD can solve multiple least-squares problems at the same time (hence NRHS).

From the Lapack docs for SGELSD:

[in,out] B

      B is REAL array, dimension (LDB,NRHS)
      On entry, the M-by-NRHS right hand side matrix B.
      On exit, B is overwritten by the N-by-NRHS solution
      matrix X.  If m >= n and RANK = n, the residual
      sum-of-squares for the solution in the i-th column is given
      by the sum of squares of elements n+1:m in that column.

[in] LDB

      LDB is INTEGER
      The leading dimension of the array B. LDB >= max(1,max(M,N)).
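To illustrate, here is a minimal Python sketch for an underdetermined case (m = 2, n = 3). It shows the padded B allocation that SGELSD expects, then solves the same system through SciPy's `lstsq` with `lapack_driver='gelsd'`, which calls the GELSD routine (and does the padding internally when you pass the original right-hand side). The array names are my own:

```python
import numpy as np
from scipy.linalg import lstsq

m, n, nrhs = 2, 3, 1          # n > m: solution x has n entries, input b has m
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]], dtype=np.float32)
b = np.array([1.0, 2.0], dtype=np.float32)

# What SGELSD requires: B allocated with LDB = max(M, N) rows,
# the right-hand side in the first m rows, the remainder zero-padded.
ldb = max(m, n)
B = np.zeros((ldb, nrhs), dtype=np.float32)
B[:m, 0] = b                  # on exit, B[:n, 0] would hold the solution x

# SciPy's lstsq with lapack_driver='gelsd' handles this allocation for us.
x, res, rank, s = lstsq(A, b, lapack_driver='gelsd')
print(x.shape)                # (3,) -- n entries, not m
```

So the answer to the question is essentially yes: allocate max(m, n) rows per right-hand side, put the b data in the first m rows, and read the solution from the first n rows on exit.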
Alex I