
I am developing an algorithm that solves Ax = b, where A and b are known.

There are two ways to do this: x = A^-1 b, or using a Cholesky decomposition. I know the matrix will always be square and positive definite, although det(A) may be zero. In those rare cases I can just ignore it. But from a computation and efficiency point of view, is creating an inverse matrix too inefficient?

user1876942
  • This question might be better on http://math.stackexchange.com/ – Roger Rowland Oct 31 '13 at 09:17
  • Cholesky will only work if A is also symmetric -- if so, go with Cholesky for sure. – Drew Hall Oct 31 '13 at 09:19
  • But why go for Cholesky? A is always symmetric. I am not wondering about the maths but about whether a computer can do one more efficiently than the other. – user1876942 Oct 31 '13 at 09:23
  • Full matrix inversion is more prone to roundoff error. A better solution, especially if you have multiple RHS vectors, is LU decomposition. – duffymo Oct 31 '13 at 12:37
  • Just a correction, but there are plenty more ways to solve Ax = b than just the two methods mentioned above. See my answer below. – DashControl Nov 01 '13 at 13:17
  • If it is positive definite, then det(A) can never be zero. Maybe you mean positive semidefinite. – texasflood Jul 15 '16 at 14:36

2 Answers


In general, you always want to use a solver; the actual solve should run about as fast as multiplying by an inverse. Not only is computing an inverse matrix inefficient compared to doing a decomposition, but using an inverse matrix also has precision problems that a decompose/solve approach avoids.
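A minimal NumPy sketch of the comparison (the test matrix here is illustrative, not from the question -- it is made well-conditioned on purpose so both approaches succeed):

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((500, 500))
A = G @ G.T + 500.0 * np.eye(500)   # symmetric positive definite test matrix
b = rng.standard_normal(500)

x_solve = np.linalg.solve(A, b)     # decomposition-based solver (LU under the hood)
x_inv = np.linalg.inv(A) @ b        # explicit inverse: more work, more roundoff

# Residual norms; the solver's is typically as small or smaller.
print(np.linalg.norm(A @ x_solve - b))
print(np.linalg.norm(A @ x_inv - b))
```

For repeated solves with the same A and different b, factor once and reuse the factorization rather than forming the inverse.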

If you have a symmetric matrix, a Cholesky decomposition is a reasonable choice. The closely-related LDL decomposition has comparable precision, while also avoiding the need for square roots.
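A small sketch of the Cholesky route with plain NumPy (the matrix is an illustrative example; in practice the two triangular solves would use a dedicated triangular-solver routine such as `scipy.linalg.solve_triangular`):

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])          # symmetric positive definite
b = np.array([1.0, 2.0])

L = np.linalg.cholesky(A)           # A = L @ L.T, with L lower triangular
# Solve L y = b (forward substitution), then L.T x = y (back substitution).
y = np.linalg.solve(L, b)
x = np.linalg.solve(L.T, y)
```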

If your matrix is not symmetric, you can't use a Cholesky or LDL decomposition -- use the LU decomposition method instead.
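A sketch of the LU route through SciPy's factor/solve pair, assuming SciPy is available (the matrix and vector are illustrative):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[0.0, 2.0],
              [1.0, 1.0]])          # not symmetric: Cholesky/LDL do not apply
b = np.array([4.0, 3.0])

lu, piv = lu_factor(A)              # factor once (O(n^3))...
x = lu_solve((lu, piv), b)          # ...then solve cheaply (O(n^2)); reusable for many b
```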

comingstorm

For large matrices, yes, computing the inverse is very inefficient. However, special properties, such as the matrix being lower triangular, make the inverse much simpler to compute.

In numerical analysis, the most typical way to solve Ax = b is an LU factorization of A (giving LUx = b), then solving Ly = b for y and Ux = y for x.
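The two triangular solves can be made explicit with SciPy, assuming it is available (note that with partial pivoting the factorization is A = P L U, so the forward solve uses P^T b; the matrix here is illustrative):

```python
import numpy as np
from scipy.linalg import lu, solve_triangular

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
b = np.array([1.0, 2.0, 3.0])

P, L, U = lu(A)                                # A = P @ L @ U (partial pivoting)
y = solve_triangular(L, P.T @ b, lower=True)   # forward solve: L y = P^T b
x = solve_triangular(U, y)                     # back solve:    U x = y
```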

For a more stable approach, consider a QR decomposition, where Q has the special property Q^T Q = I, so Rx = Q^T b, where R is upper triangular (only one back solve, as opposed to a forward and a back solve with LU).
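A minimal QR sketch in plain NumPy (illustrative matrix; the final step would use a triangular back-substitution routine in practice):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
b = np.array([1.0, 1.0])

Q, R = np.linalg.qr(A)              # Q orthogonal (Q.T @ Q = I), R upper triangular
x = np.linalg.solve(R, Q.T @ b)     # single back solve: R x = Q^T b
```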

Other special properties, such as the matrix being symmetric (Cholesky) or banded (banded Gaussian elimination), make certain solvers better than others.

As always be aware of floating point errors in your calculations.

Might I add that iterative solvers are also a popular way to solve systems. The conjugate gradient method is the most widely used and works well for sparse matrices. Jacobi and Gauss-Seidel are good for matrices that are diagonally dominant and sparse.
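A minimal conjugate gradient sketch in plain NumPy (in practice you would reach for a library routine such as `scipy.sparse.linalg.cg`; the matrix here is an illustrative small SPD example):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Basic conjugate gradient for symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x                    # residual
    p = r.copy()                     # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)        # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:    # converged
            break
        p = r + (rs_new / rs) * p    # new conjugate direction
        rs = rs_new
    return x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

Note that CG only touches A through matrix-vector products, which is exactly why it suits large sparse systems.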

DashControl
  • +1 for the QR comment. I tend to think of LU first, because it's more commonly used for solving finite element equations. Nice bit about the one back solve. – duffymo Nov 01 '13 at 14:36