
I have an (n x n) symmetric matrix A and an (n x 1) vector b. Basically, I just need to solve Ax = b for x. The issue is that A is potentially going to be massive, so I'm looking for the most efficient algorithm for solving linear systems in C++. I looked over the Eigen library. Apparently it has an SVD method, but I've been told it's slow. Solving x = inverse(A) * b also seems like it would be suboptimal. Is uBLAS faster? Are there any more efficient methods? Thanks.

Edit: matrix A is positive definite and not sparse.

aesir
  • Umm... Gaussian elimination? – Eutherpy Nov 24 '13 at 22:32
  • Is your matrix positive definite? Then (P)CG might be an option for you. Other than that, have a look at iterative methods (Gauss-Seidel, Jacobi and so on) instead of a direct solver if your system is too large. – Zeta Nov 24 '13 at 22:36
  • What is your average matrix size? Is your matrix sparse? – ggael Nov 24 '13 at 23:07
  • Speaking from personal experience, stability is a much greater concern than speed. There's not much point in a fast solver when it produces garbage numbers, and real-world problems are often unstable. Plain Gaussian elimination accumulated rounding error badly for me; I got excellent results from LU decomposition. There's not much point in rolling your own anymore: try them all and don't forget to check that the results are correct. – Hans Passant Nov 24 '13 at 23:28
  • This should be the right thing for you: [Eigen Cholesky](http://eigen.tuxfamily.org/dox/group__Cholesky__Module.html) (maybe the variant with pivoting for stability). – Tobias Jun 08 '14 at 16:55
  • If your matrices are huge but well-conditioned then you can use [iterative techniques](http://en.wikipedia.org/wiki/Conjugate_gradient_method) and [incomplete Cholesky decomposition](http://en.wikipedia.org/wiki/Incomplete_Cholesky_factorization) as pre-conditioner. That should be faster than any complete factorization for well-conditioned matrices. – Tobias Jun 08 '14 at 17:02
  • Also, don't invert A and then multiply by B. Use a method (like Eigen's `solve()`) that solves the whole system more efficiently. – Ela782 Jan 27 '15 at 11:36

1 Answer


The best way to solve a system of linear equations of the form Ax = b is to do the following.

  • Decompose A into the form A = M1 * M2, where M1 is lower triangular and M2 is upper triangular
  • Solve M1 * y = b for y using forward substitution
  • Solve M2 * x = y for x using back substitution
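
For the positive definite case from your edit, a minimal sketch of these three steps using Eigen's LLT (Cholesky) class might look like this (the small random SPD matrix is just a stand-in for your data):

```cpp
#include <Eigen/Dense>
#include <iostream>

int main() {
    // Stand-in data: M * M^T + n*I is symmetric positive definite.
    const int n = 4;
    Eigen::MatrixXd M = Eigen::MatrixXd::Random(n, n);
    Eigen::MatrixXd A = M * M.transpose()
                      + static_cast<double>(n) * Eigen::MatrixXd::Identity(n, n);
    Eigen::VectorXd b = Eigen::VectorXd::Random(n);

    // Step 1: decompose A = L * L^T (Cholesky).
    Eigen::LLT<Eigen::MatrixXd> llt(A);

    // Step 2: forward substitution, solve L * y = b.
    Eigen::VectorXd y = llt.matrixL().solve(b);

    // Step 3: back substitution, solve L^T * x = y.
    Eigen::VectorXd x = llt.matrixU().solve(y);

    std::cout << "residual norm: " << (A * x - b).norm() << "\n";
    return 0;
}
```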

For general square matrices, step 1 would use LU Decomposition.

For non-square matrices, step 1 would use QR Decomposition (for an over-determined system this gives the least-squares solution).
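
For illustration, minimal sketches of both cases using Eigen's built-in solvers (the function names here are just placeholders):

```cpp
#include <Eigen/Dense>

// General square, invertible A: LU with partial pivoting.
Eigen::VectorXd solveSquare(const Eigen::MatrixXd& A, const Eigen::VectorXd& b) {
    return A.partialPivLu().solve(b);
}

// Rectangular (e.g. over-determined) A: QR gives the least-squares solution.
Eigen::VectorXd solveLeastSquares(const Eigen::MatrixXd& A, const Eigen::VectorXd& b) {
    return A.colPivHouseholderQr().solve(b);
}
```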

If matrix A is positive definite and not sparse (as in your edit), you'd use Cholesky Decomposition (A = L * L^T, with L lower triangular) for the first step; it does roughly half the arithmetic of an LU factorization.

If you want to use Eigen, you first decompose A and then do the triangular solves with the factors; Eigen's decomposition classes wrap both steps behind a single solve() call.
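
For the positive definite, dense matrix from your edit that amounts to one call; a minimal sketch (assuming A and b are already loaded as an Eigen::MatrixXd and Eigen::VectorXd):

```cpp
#include <Eigen/Dense>

Eigen::VectorXd solveSPD(const Eigen::MatrixXd& A, const Eigen::VectorXd& b) {
    // llt() is the plain Cholesky A = L*L^T; ldlt() (A = L*D*L^T) avoids the
    // square roots and also tolerates positive semi-definite matrices.
    return A.llt().solve(b);
}
```

As the comment above notes, this is both faster and more accurate than forming inverse(A) * b.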

If this is still slow, thankfully there are numerous linear algebra libraries available that can help improve the performance. The routines you should be looking for are dpotrf (the Cholesky factorization) and dpotrs (the triangular solves); any LAPACK implementation provides them, e.g. the reference Netlib LAPACK, ATLAS, OpenBLAS, or Intel MKL.
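
As a sketch of what the raw calls look like through the LAPACKE C interface (assuming you link against one of the implementations above; error handling kept minimal):

```cpp
#include <lapacke.h>
#include <stdexcept>
#include <vector>

// Solve A*x = b for symmetric positive definite A, column-major storage.
// A and b are taken by value because LAPACK overwrites them in place.
std::vector<double> solveSpdLapack(std::vector<double> A, std::vector<double> b, int n) {
    // Factor A = L * L^T using the lower triangle.
    lapack_int info = LAPACKE_dpotrf(LAPACK_COL_MAJOR, 'L', n, A.data(), n);
    if (info != 0) throw std::runtime_error("dpotrf failed (A not positive definite?)");

    // Forward and back substitution with the factor; b is overwritten with x.
    info = LAPACKE_dpotrs(LAPACK_COL_MAJOR, 'L', n, 1, A.data(), n, b.data(), n);
    if (info != 0) throw std::runtime_error("dpotrs failed");
    return b;
}
```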

If you are using Eigen in the overall project, you can interface the LAPACK routine you need as described here.
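
If I remember correctly, newer Eigen versions can also forward some of their dense decompositions to an external LAPACK automatically; a sketch (check your Eigen version's documentation for which decompositions are actually covered):

```cpp
// Must be defined before including Eigen, and the program must be linked
// against a LAPACKE-providing library (OpenBLAS, MKL, ...).
#define EIGEN_USE_LAPACKE
#include <Eigen/Dense>

Eigen::VectorXd solveSPD(const Eigen::MatrixXd& A, const Eigen::VectorXd& b) {
    return A.llt().solve(b);  // the factorization can then be backed by LAPACK's dpotrf
}
```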

Pavan Yalamanchili
  • Excellent answer. There is also Eigen, which is a header-only C++ library: `Eigen::MatrixXd A; Eigen::VectorXd b; /* ... load A, b ... */ Eigen::VectorXd x = A.fullPivLu().solve(b);`. fullPivLu() does A = L*U with full pivoting, but there are about 10 different decompositions you can choose from. – wcochran Sep 28 '18 at 16:14