scipy.sparse.linalg.spsolve is clear enough, but it seems that for speed you must either pip install scikit-umfpack, or else

- build UMFPACK and AMD from SuiteSparse
- then rebuild scipy from source, with [umfpack] umfpack_libs = ... in site.cfg.

Otherwise scipy.sparse.linalg falls back to the slower SuperLU.
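For what it's worth, once scikit-umfpack is installed the call itself doesn't change; a minimal sketch (the matrix here is just a random stand-in, and use_umfpack / use_solver are scipy's switches between UMFPACK and SuperLU):

    import numpy as np
    import scipy.sparse as sparse
    import scipy.sparse.linalg as spla

    A = (sparse.eye(1000) + sparse.rand(1000, 1000, density=0.01, random_state=0)).tocsc()
    b = np.ones(1000)

    x = spla.spsolve(A, b)                      # UMFPACK if scikit-umfpack is importable, else SuperLU
    x = spla.spsolve(A, b, use_umfpack=False)   # force SuperLU, e.g. to compare timings
    spla.use_solver(useUmfpack=True)            # or set the default once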
Is scikit.sparse the way to go? Compared to what, with what criteria? If C/C++ is enough for you, use SuiteSparse directly. The right tool depends on what you're comfortable with, and on how many users it has to serve: one, two, many. Maybe better visualization would help your project more than a faster spsolve.
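A quick look at the sparsity pattern, for example, is one matplotlib call; a sketch with a random stand-in matrix:

    import matplotlib.pyplot as plt
    from scipy import sparse

    A = sparse.rand(1000, 1000, density=0.01, format="csr", random_state=0)
    plt.spy(A, markersize=1)        # dot wherever A has a nonzero entry
    plt.title("sparsity pattern of A")
    plt.show()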
Some pretty obvious pluses and minuses of scipy.sparse:
+ python for fast development: data input, matrices, visualization
+ several packages have built on scipy.sparse; ask around in your application area (which is?)
- rough edges (sparse matrices are a pain), and afaik no wiki to collect hints and code snippets
- layers upon layers -- scipy.sparse, SuiteSparse, ... BLAS ... -- make timing and debugging tough.
Fwiw, solver times vary a lot on my iMac. These are all with default args, without umfpack. (This is NOT a realistic test case, but satisficing is often good enough.)
    from scipy import sparse
    from scipy.sparse import linalg

    X = sparse.rand(m, n, dens, format="csr")   # m = n = 5000, dens as below
    A = 1e-6 * sparse.eye(m) + X * X.T
    x = linalg.spsolve(A, b)                    # likewise bicg, cg, gmres ... -- full timing loop sketched below
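Here is roughly the loop behind the times below; m, dens, b and random_state are placeholder choices, and of course the iterative solvers' info / residuals should be checked too, which this sketch skips:

    import time
    import numpy as np
    import scipy.sparse as sparse
    import scipy.sparse.linalg as spla

    m, dens = 5000, 0.42 / 100                  # one of the three cases below
    X = sparse.rand(m, m, density=dens, format="csr", random_state=0)
    A = 1e-6 * sparse.eye(m) + X * X.T          # symmetric, near positive definite
    b = np.ones(m)

    t0 = time.time()
    x = spla.spsolve(A.tocsc(), b)              # direct: SuperLU, or UMFPACK if installed
    print("spsolve  %5.0f msec" % ((time.time() - t0) * 1e3))

    for solver in (spla.bicg, spla.bicgstab, spla.cg, spla.cgs, spla.gmres,
                   spla.lgmres, spla.minres, spla.qmr, spla.lsmr):
        t0 = time.time()
        x = solver(A, b)[0]                     # each returns (x, info, ...) -- check convergence
        print("%-8s %5.0f msec" % (solver.__name__, (time.time() - t0) * 1e3))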
                 A (5000, 5000), times in msec
             density 0.42 %    0.84 %    1.3 %
    spsolve          51         428      1462
    bicg              5          12        16
    bicgstab          3           7         9
    cg                2           5         7
    cgs               4          10        11
    gmres             3           6         7
    lgmres            4           8        10
    minres            1           2         1
    qmr               6          13        18
    lsmr              5          12        14