
Let A be an (N, M, M) array (with N very large). I would like to compute scipy.linalg.expm(A[n,:,:]) for each n in range(N). I can of course just use a for loop, but I was wondering whether there is some trick to do this in a better way (something like np.einsum).

I have the same question for other operations, like inverting matrices (inversion is addressed in the comments).

HolyMonk
  • For matrix inversion, see: [Is there a way to efficiently invert an array of matrices with numpy?](https://stackoverflow.com/questions/11972102/is-there-a-way-to-efficiently-invert-an-array-of-matrices-with-numpy) – jdehesa Mar 28 '18 at 09:23
  • Thank you for the reference! – HolyMonk Mar 28 '18 at 09:27
  • Unless the `expm` function is rewritten as a gufunc, there is not much to do. You can only gain optimization benefits from increasing the loop speed which is pretty negligible if compared to taking the exponential. – percusse Mar 29 '18 at 22:15

1 Answer


Depending on the size and structure of your matrices, you can do better than a loop.

Assuming your matrices can be diagonalized as A = V D V^(-1) (where D has the eigenvalues on its diagonal and V contains the corresponding eigenvectors as columns), you can compute the matrix exponential as

exp(A) = V exp(D) V^(-1)

where exp(D) simply contains exp(lambda) for each eigenvalue lambda on its diagonal. This is easy to prove using the power series definition of the exponential function: every power satisfies A^k = V D^k V^(-1), so the series factors termwise into V exp(D) V^(-1). If the matrix A is furthermore normal, the matrix V is unitary and thus its inverse can be computed by simply taking its adjoint.
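
As a quick numerical sanity check of this identity (a minimal sketch for a single matrix, not part of the benchmark below; a random real matrix is almost surely diagonalizable):

import numpy as np
import scipy.linalg

A = np.random.rand(5, 5)
vals, vects = np.linalg.eig(A)        # A = V D V^(-1)
via_eig = vects @ np.diag(np.exp(vals)) @ np.linalg.inv(vects)
print(np.allclose(scipy.linalg.expm(A), via_eig))   # True, up to roundoff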

The good news is that numpy.linalg.eig and numpy.linalg.inv both work with stacked matrices just fine:

import numpy as np
import scipy.linalg

A = np.random.rand(1000,10,10)

def loopy_expm(A):
    # baseline: one scipy.linalg.expm call per matrix in the stack
    expmA = np.zeros_like(A)
    for n in range(A.shape[0]):
        expmA[n,...] = scipy.linalg.expm(A[n,...])
    return expmA

def eigy_expm(A):
    # batched eigendecomposition: A[n] = V D V^(-1) for each n
    vals,vects = np.linalg.eig(A)
    # contract V @ diag(exp(vals)) @ V^(-1) over the stacked leading axis;
    # for real input the result may be complex with negligible imaginary parts
    return np.einsum('...ik, ...k, ...kj -> ...ij',
                     vects,np.exp(vals),np.linalg.inv(vects))

Note that there's probably some room for optimization in specifying the order of operations in the call to einsum, but I didn't investigate that.
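
One low-effort knob is einsum's optimize flag, which lets numpy search for a cheaper pairwise contraction order; a sketch of that variant (I haven't timed it, so treat any gain as an assumption):

def eigy_expm_opt(A):
    vals, vects = np.linalg.eig(A)
    # optimize=True asks numpy to choose the contraction order itself
    return np.einsum('...ik, ...k, ...kj -> ...ij',
                     vects, np.exp(vals), np.linalg.inv(vects),
                     optimize=True)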

Testing the above for the random array:

In [59]: np.allclose(loopy_expm(A),eigy_expm(A))
Out[59]: True

In [60]: %timeit loopy_expm(A)
824 ms ± 55.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

In [61]: %timeit eigy_expm(A)
138 ms ± 992 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

That's already nice. If you're lucky enough that your matrices are all normal (say, because they are real symmetric):

A = np.random.rand(1000,10,10)
A = (A + A.transpose(0,2,1))/2    # symmetrize: each A[n] is now real symmetric

def eigy_expm_normal(A):
    vals,vects = np.linalg.eig(A)
    # for normal matrices V is unitary, so V^(-1) = V^H: the '...jk' index
    # together with conj() gives the adjoint instead of an explicit inverse
    return np.einsum('...ik, ...k, ...jk -> ...ij',
                     vects,np.exp(vals),vects.conj())

Note the symmetrized definition of the input matrix, and that the transpose is expressed inside the einsum pattern (the '...jk' index combined with conj() yields the adjoint of V). Results:

In [78]: np.allclose(loopy_expm(A),eigy_expm_normal(A))
Out[78]: True

In [79]: %timeit loopy_expm(A)
878 ms ± 89.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

In [80]: %timeit eigy_expm_normal(A)
55.8 ms ± 868 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

That is a 15-fold speedup for the above example shapes.
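
Since the matrices here are Hermitian by construction, np.linalg.eigh is also applicable: it too accepts stacked matrices and guarantees real eigenvalues and orthonormal eigenvectors. A sketch of that variant (untimed here, so I make no performance claim):

def eighy_expm(A):
    # eigh assumes each A[n] is Hermitian (real symmetric in this example)
    vals, vects = np.linalg.eigh(A)
    return np.einsum('...ik, ...k, ...jk -> ...ij',
                     vects, np.exp(vals), vects.conj())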


It should be noted, though, that scipy.linalg.expm uses Padé approximation according to the documentation. This implies that for ill-conditioned (for instance, nearly defective) matrices, the eigendecomposition approach may yield less accurate results than scipy.linalg.expm. I'm not familiar with how that function works internally, but I expect it to be safer for pathological inputs.
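
To see why the decomposition route can fail, consider a defective matrix (a sketch: the Jordan block below has a repeated eigenvalue but only one eigenvector, so V is numerically singular):

import numpy as np
import scipy.linalg

J = np.array([[1.0, 1.0],
              [0.0, 1.0]])            # Jordan block: defective
print(scipy.linalg.expm(J))           # well-behaved: [[e, e], [0, e]]
vals, vects = np.linalg.eig(J)
print(np.linalg.cond(vects))          # ~1e16: inverting V is hopeless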

  • Apparently the above calls to `einsum` throw an error on certain versions of numpy (including the current 1.14.5) [due to a bug](https://github.com/numpy/numpy/issues/10794). This should be soon fixed in 1.15, in the meantime the workaround is to remove the space after the `->` in the einsum index strings: `'...ik, ...k, ...jk ->...ij'`. – Andras Deak -- Слава Україні Jul 09 '18 at 10:39