
I have a problem that requires solving for X in AX = B. A is on the order of 15000 x 15000, and is sparse and symmetric. B is 15000 x 7500 and is NOT sparse. What is the fastest way to solve for X?

I can think of 2 ways.

  1. Simplest possible way, X = A\B
  2. Using a for loop,

    invA = A\speye(size(A));
    for i = 1:size(B,2)
        X(:,i) = invA*B(:,i);
    end
    

Is there a better way than the above two? If not, which one is best between the two I mentioned?

Amro
John Smith
    You could always benchmark it with `tic` and `toc` (and if you do, report the results since it would be good to know) – nneonneo Oct 11 '12 at 03:48
  • It takes forever on my machine to run the first case, that is why I didn't report the time :). I will try a smaller matrix and post it here. – John Smith Oct 11 '12 at 03:53
  • Is A positive definite? If so, computing Cholesky decomposition then solve by two applications of "\" is the standard practice. – Yanshuai Cao Oct 11 '12 at 05:32
  • Is A stored in sparse form? MATLAB needs to be "told" a matrix is sparse to gain from the sparseness. Use the function sparse to do so. –  Oct 11 '12 at 08:30
  • Yes, of course I explicitly define A as sparse. I am basically solving a finite element analysis problem, and A is the stiffness matrix. – John Smith Oct 11 '12 at 18:57
  • If you do FEM modeling then I encourage you to have a look at MILAMIN and the link I have included in my answer. You will find many of the things mentioned there useful when running FEM simulations in MATLAB. – angainor Oct 11 '12 at 19:10

4 Answers


First things first: never, ever compute the inverse of A. The inverse of a sparse matrix is generally not sparse (a diagonal matrix is the exception; try it for a simple tridiagonal matrix). That line on its own kills your code, both memory-wise and performance-wise. And computing the inverse is numerically less accurate than other methods.

Generally, \ should work fine for you. MATLAB does recognize that your matrix is sparse and executes a sparse factorization. If you give the whole matrix B as the right-hand side, the performance is much better than solving one system at a time with a single vector b, so you are doing that correctly. The only other technical thing you could try here is to explicitly call lu, chol, or ldl, depending on the matrix you have, and perform the backward/forward substitutions yourself. You may save some time there.
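As a sketch of that explicit-factorization route (using ldl, which handles a symmetric but possibly indefinite A; for a positive definite A, chol is the analogous call), you factor once and reuse the factor for every column of B:

```matlab
% Factor once: MATLAB's sparse ldl also returns a fill-reducing
% permutation P, with P'*A*P = L*D*L'.
[L, D, P] = ldl(A);

% Reuse the factor for all right-hand sides at once:
% A*X = B  =>  X = P * (L' \ (D \ (L \ (P'*B))))
X = P * (L' \ (D \ (L \ (P' * B))));
```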

The fact is that the methods used to solve linear systems of equations, especially sparse systems, strongly depend on the problem. But in almost any (sparse) case I can imagine, factorization of a 15k system should only take a fraction of a second. That is not a large system nowadays. If your code is slow, it probably means that your factor is not that sparse anymore. You need to make sure that your matrix is properly reordered to minimize the fill (the non-zero entries added during sparse factorization). That is the crucial step. Have a look at this page for some tests and explanations on how to reorder your system. And have a brief look at example reorderings in this SO thread.
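To see how much the reordering matters, you can compare the fill of the factor with and without a fill-reducing permutation (a sketch, assuming A is positive definite so that chol succeeds; amd ships with MATLAB):

```matlab
p = amd(A);                        % approximate minimum degree ordering
Lplain = chol(A, 'lower');         % factor in the original ordering
Lperm  = chol(A(p,p), 'lower');    % factor of the reordered matrix
fprintf('nnz(A): %d\n', nnz(A));
fprintf('fill, original ordering: %d\n', nnz(Lplain));
fprintf('fill, amd ordering:      %d\n', nnz(Lperm));
```

For a matrix with meaningful structure, the second count is typically far smaller than the first.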

angainor
  • +1 for the statement "First things first - never, ever compute inverse of A." –  Oct 11 '12 at 08:29
  • Obligatory link for people that go on about not using `inv`: http://arxiv.org/abs/1201.6035 Its summary: if all you're doing is solving Ax=b, `inv` is as safe as Gaussian elimination with partial pivoting, the "right" alternative. MATLAB's \ here will use Cholesky decomposition since your matrix is symmetric (as long as it has positive diagonals)---drat, the article doesn't compare Gaussian elimination's stability with Cholesky's. – Ahmed Fasih Jun 16 '14 at 18:57
  • @AhmedFasih Thank you for this very interesting read. However, the first reason not to use it is that `inv(A)` is NOT SPARSE (first paragraph). When `A` is sparse there are much more efficient ways of solving the system (O(n^2) for 3D FEM/FD problems, O(n^1.5) for 2D). So for those people who still insist - no, NEVER use `inv(A)`. Even if you can show that in some cases it is as accurate as sparse factorization. – angainor Jun 16 '14 at 19:36

Since you can answer yourself which of the two is faster, I'll try to suggest the next options. Solve it on a GPU. Plenty of details can be found online, including this SO post, a MATLAB benchmark of A\b, etc. Additionally, there's the MATLAB add-on LAMG (Lean Algebraic Multigrid). LAMG is a fast graph-Laplacian solver. It can solve Ax=b in O(m) time and storage.
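A minimal sketch of the GPU route (requires the Parallel Computing Toolbox; whether mldivide supports a sparse gpuArray depends on your MATLAB release, so treat this as an outline rather than guaranteed-working code):

```matlab
Ag = gpuArray(A);      % move the system to the GPU
Bg = gpuArray(B);
Xg = Ag \ Bg;          % mldivide dispatches to a GPU solver
X  = gather(Xg);       % copy the result back to host memory
```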

bla

If your matrix A is symmetric positive definite, then here's what you can do to solve the system efficiently and stably:

  • First, compute the Cholesky decomposition, A = L*L'. Since you have a sparse matrix and you want to exploit the sparsity to accelerate the solve, you should not apply chol directly, which would destroy the sparsity pattern through fill-in. Instead, use one of the reordering methods described here.
  • Then, solve the system by X = L'\(L\B)
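The two steps above can be sketched as follows (note that the three-output form of chol on a sparse matrix already returns a fill-reducing permutation P, with P'*A*P = L*L', so no separate reordering call is needed):

```matlab
[L, flag, P] = chol(A, 'lower');
assert(flag == 0, 'A is not positive definite');

% A*X = B  =>  X = P * (L' \ (L \ (P'*B)))
X = P * (L' \ (L \ (P' * B)));
```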

Finally, if you are not dealing with potentially complex values, you can replace every L' with L.', which gives a slight further speedup because it performs a plain transpose instead of computing the complex conjugate transpose.

Another alternative would be the preconditioned conjugate gradient method, pcg in MATLAB. This one is very popular in practice, because you can trade off speed for accuracy, i.e. give it fewer iterations and it will give you a (usually pretty good) approximate solution. You also never need to store the matrix A explicitly; it is enough to be able to compute matrix-vector products with A, which helps if your matrix doesn't fit into memory.
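A sketch of that approach with an incomplete-Cholesky preconditioner (pcg takes one right-hand side at a time, so the columns of B are looped over; the tolerance and iteration cap here are illustrative, not tuned):

```matlab
M = ichol(A);                          % no-fill incomplete Cholesky
X = zeros(size(A,1), size(B,2));       % preallocate the solution
for k = 1:size(B,2)
    X(:,k) = pcg(A, B(:,k), 1e-8, 500, M, M');
end
```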

Yanshuai Cao
  • Why do you think `L'` is slower than `L.'`? Did you do some tests? For real numbers those two are the same. – angainor Oct 11 '12 at 08:21
  • @angainor, I remember doing some test a long time ago with an older version of Matlab, maybe things have changed. – Yanshuai Cao Oct 12 '12 at 03:35

If this takes forever to solve in your tests, you are probably going into virtual memory for the solve. A 15k-square (full) matrix requires 1.8 gigabytes of RAM just to store.

>> 15000^2*8
ans =
      1.8e+09

You will need some serious RAM to solve this, as well as the 64 bit version of MATLAB. NO factorization will help you unless you have enough RAM to solve the problem.
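Note that A is not even the biggest consumer here: B and X are full 15000 x 7500 matrices, so each one costs roughly another 0.9 GB on top of whatever A and its factor need:

```matlab
>> 15000*7500*8
ans =
   900000000
```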

If your matrix is truly sparse, then are you using MATLAB's sparse form to store it? If not, then MATLAB does NOT know the matrix is sparse, and does not use a sparse factorization.

How sparse is A? Many people think that a matrix that is half full of zeros is "sparse". For a matrix this size, that would be a waste of time: you need something that is well over 99% zeros to truly gain from a sparse factorization. This is because of fill-in; otherwise, the resulting factorized matrix is almost always nearly full.
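A quick check of whether your matrix qualifies (a sketch; the 99% figure is the rule of thumb from the paragraph above):

```matlab
density = nnz(A) / numel(A);
fprintf('A is %.2f%% nonzero\n', 100 * density);
if density > 0.01
    warning('A is less than 99%% zeros; sparse factorization may not pay off.');
end
```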

If you CANNOT get more RAM (RAM is cheeeeeeeeep you know, certainly once you consider the time you have wasted trying to solve this) then you will need to try an iterative solver. Since these tools do not factorize your matrix, if it is truly sparse, then they will not go into virtual memory. This is a HUGE savings.

Since iterative tools often require a preconditioner to work as well as possible, it can take some study to find the best preconditioner.