I have a system of equations of the form A*x = B, where [A] is a tridiagonal coefficient matrix. Using the NumPy solver numpy.linalg.solve I can solve the system of equations for x.
See the example below of how I develop the tridiagonal [A] matrix and the {B} vector, and solve for x:
# Solve system of equations with a tridiagonal coefficient matrix
# uses numpy.linalg.solve
# use Python 3 print function
from __future__ import print_function
from __future__ import division
# modules
import numpy as np
import time
ti = time.perf_counter()  # time.clock() was removed in Python 3.8
#---- Build [A] array and {B} column vector
m = 1000 # size of array, make this 8000 to see time benefits
A = np.zeros((m, m)) # pre-allocate [A] array
B = np.zeros((m, 1)) # pre-allocate {B} column vector
A[0, 0] = 1
A[0, 1] = 2
B[0, 0] = 1
for i in range(1, m-1):
A[i, i-1] = 7 # node-1
A[i, i] = 8 # node
A[i, i+1] = 9 # node+1
B[i, 0] = 2
A[m-1, m-2] = 3
A[m-1, m-1] = 4
B[m-1, 0] = 3
print('A \n', A)
print('B \n', B)
#---- Solve using numpy.linalg.solve
x = np.linalg.solve(A, B) # solve A*x = B for x
print('x \n', x)
#---- Elapsed time for each approach
print('NUMPY time', time.perf_counter()-ti, 'seconds')
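As an aside on the construction step, the same [A] and {B} can be built without a Python-level for-loop by stacking the three constant diagonals with np.diag and then overwriting the two boundary rows. This is only a sketch of one vectorized alternative, not necessarily the fastest one:

```python
import numpy as np

m = 1000  # same size as above

# stack the three constant diagonals: 7 below, 8 on, 9 above the main diagonal
A = (np.diag(np.full(m - 1, 7.0), k=-1)
     + np.diag(np.full(m, 8.0), k=0)
     + np.diag(np.full(m - 1, 9.0), k=1))

# overwrite the two boundary rows to match the loop version
A[0, :2] = [1, 2]
A[-1, -2:] = [3, 4]

# {B}: 2 everywhere except the first and last entries
B = np.full((m, 1), 2.0)
B[0, 0] = 1
B[-1, 0] = 3
```

Note this still allocates the full dense m-by-m array; it only removes the interpreted loop, not the O(m^2) storage.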
So my question relates to two sections of the above example:

- Since I am dealing with a tridiagonal matrix for [A], also called a banded matrix, is there a more efficient way to solve the system of equations instead of using numpy.linalg.solve?
- Also, is there a better way to create the tridiagonal matrix instead of using a for-loop?
The above example runs on Linux in about 0.08 seconds according to the elapsed-time measurement. The numpy.linalg.solve function works fine, but I'm trying to find an approach that takes advantage of the tridiagonal form of [A] in hopes of speeding up the solution even further, and then apply that approach to a more complicated example.
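For reference, one banded-aware option I have been looking at is scipy.linalg.solve_banded, which takes the matrix in a compact (l+u+1, m) diagonal-ordered storage instead of the full dense array. A sketch against the example above, assuming SciPy is available:

```python
import numpy as np
from scipy.linalg import solve_banded

m = 1000

# dense [A] and {B} as in the question
A = np.zeros((m, m))
B = np.full((m, 1), 2.0)
A[0, :2] = [1, 2]
B[0, 0] = 1
for i in range(1, m - 1):
    A[i, i - 1:i + 2] = [7, 8, 9]
A[-1, -2:] = [3, 4]
B[-1, 0] = 3

# diagonal-ordered storage for solve_banded((1, 1), ab, b):
# ab[0, 1:]  -> superdiagonal (ab[0, 0] is unused)
# ab[1, :]   -> main diagonal
# ab[2, :-1] -> subdiagonal   (ab[2, -1] is unused)
ab = np.zeros((3, m))
ab[0, 1:] = np.diagonal(A, 1)
ab[1, :] = np.diagonal(A)
ab[2, :-1] = np.diagonal(A, -1)

x_banded = solve_banded((1, 1), ab, B)  # banded solve, O(m) for tridiagonal
```

As I understand it, solve_banded dispatches to LAPACK's banded routines, so the work scales with the bandwidth rather than with m^3, and the result should agree with np.linalg.solve(A, B) to floating-point tolerance.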