
I'd like to apply a function (namely np.var, but a general method would be great) to every diagonal of an array. My arrays are square. I can do this with:

offset_list = np.arange(-1 * len(arr) + 1, len(arr))
diag_var_list = [np.var(np.diagonal(arr, k)) for k in offset_list]

If I want a subset of the diagonals I can change offset_list.

But using list comprehension seems inefficient since I'll be doing this on many large arrays. Is there a better way?

jml-happy
  • 47
  • 1
  • 2
  • The diagonals vary in length, right? – hpaulj May 31 '20 at 21:51
  • Yes. I thought about reshaping to a ragged array and filling the empty spots with the column means, but don't know how or if that's efficient. – jml-happy May 31 '20 at 22:04
  • For ragged to regular conversion, use https://stackoverflow.com/questions/38619143/, https://stackoverflow.com/questions/40569220/ – Divakar May 31 '20 at 22:09
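Following the idea in the comments, one way to do the ragged-to-regular conversion (a sketch of the padding approach the linked answers describe; the NaN fill value and variable names are my own choice, not from the links):

```python
import numpy as np

arr = np.arange(16).reshape(4, 4)
offsets = np.arange(-len(arr) + 1, len(arr))
diags = [np.diagonal(arr, k) for k in offsets]

# Pad each diagonal with NaN up to the longest length, then stack
# into one regular 2d array.
n = max(len(d) for d in diags)
padded = np.full((len(diags), n), np.nan)
for i, d in enumerate(diags):
    padded[i, :len(d)] = d

# NaN-aware reductions then give per-diagonal statistics in one call.
vars_ = np.nanvar(padded, axis=1)
```

Note that the Python loop over diagonals remains, so this mainly helps if you reuse the padded array for several reductions.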

2 Answers


Sparse matrices used in linear algebra often have a diagonal structure (though not with all diagonals present). scipy.sparse has two ways of specifying diagonals: as a list of arrays, each of a different length, and as a single 2d padded array.

https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.sparse.diags.html

https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.sparse.spdiags.html#scipy.sparse.spdiags

For numpy (dense) arrays, diagonals aren't that special. The np.diag* functions all handle one diagonal at a time (possibly offset).
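A minimal sketch of the two sparse constructions mentioned above (the offsets and values here are illustrative, not from the question):

```python
import numpy as np
from scipy import sparse

# List-of-arrays form: each diagonal has its own natural length.
d = sparse.diags([[1, 2, 3], [4, 5, 6, 7], [8, 9, 10]],
                 offsets=[-1, 0, 1])

# 2d padded form: every row of `data` has length n; entries that
# fall outside the matrix are ignored.
data = np.array([[1, 2, 3, 4],
                 [5, 6, 7, 8]])
s = sparse.spdiags(data, [0, 1], 4, 4)
```

The padded form is the closer analogue of the stride trick in the other answer, since all diagonals live in one rectangular array.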

hpaulj
  • scipy.sparse.diags requires a sequence of matrices, one for each offset. I haven't timed it, somehow ensuring a shallow copy might be ok, but I'm thinking that passing the same array in many times would not be helpful. – jml-happy Jun 02 '20 at 07:32

You can use the following idea. If major, minor = A.strides, then viewing A with strides (major + minor, minor) — done carefully, so as not to step outside the array's memory — gives an array in which each column is a diagonal. Then something like A.sum(axis=0) computes the sum of every diagonal at once. For means, apply the same trick and multiply by a correction factor, A.shape[0] / [1, 2, ..., A.shape[0], ..., 2, 1], to account for the varying diagonal lengths. For variance, use the identity variance = <(x - <x>)**2> = <x**2> - <x>**2.

import numpy as np
def rot45(A):
    """
    >>> A = np.triu(np.arange(25).reshape(5, 5), 1)
    >>> print(A)
    [[ 0  1  2  3  4]
     [ 0  0  7  8  9]
     [ 0  0  0 13 14]
     [ 0  0  0  0 19]
     [ 0  0  0  0  0]]
    >>> print(rot45(A))
    [[ 1  2  3  4]
     [ 7  8  9  0]
     [13 14  0  0]
     [19  0  0  0]]
    """
    # View A with a skewed stride so that each diagonal becomes a column.
    # as_strided does no bounds checking, hence the careful shape.
    major, minor = A.strides
    strides = major + minor, minor
    shape = A.shape[0] - 1, A.shape[1]
    return np.lib.stride_tricks.as_strided(A, shape, strides)[:, 1:]


def apply_diag(A, func):
    """
    >>> A = np.arange(25).reshape(5, 5)
    >>> print(A)
    [[ 0  1  2  3  4]
     [ 5  6  7  8  9]
     [10 11 12 13 14]
     [15 16 17 18 19]
     [20 21 22 23 24]]
    >>> offset_list = np.arange(-1 * len(A) + 1, len(A))
    >>> diag_sum_list = [np.sum(np.diagonal(A, k)) for k in offset_list]
    >>> diag_sum_list
    [20, 36, 48, 56, 60, 40, 24, 12, 4]
    >>> print(apply_diag(A, np.sum))
    [20, 36, 48, 56, 60, 40, 24, 12, 4]
    """
    # Rotate the strict upper and lower triangles separately; the main
    # diagonal is handled directly.
    U = np.triu(A, 1)
    U = rot45(U)
    D = np.tril(A, -1).T.copy()
    D = rot45(D)
    return func(D, axis=0)[::-1].tolist() + [func(np.diag(A))] + func(U, axis=0).tolist()


def using_numpy(A):
    """
    >>> A = np.arange(25).reshape(5, 5)
    >>> print(A)
    [[ 0  1  2  3  4]
     [ 5  6  7  8  9]
     [10 11 12 13 14]
     [15 16 17 18 19]
     [20 21 22 23 24]]
    >>> offset_list = np.arange(-1 * len(A) + 1, len(A))
    >>> diag_var_list = [np.var(np.diagonal(A, k)) for k in offset_list]
    >>> diag_var_list
    [0.0, 9.0, 24.0, 45.0, 72.0, 45.0, 24.0, 9.0, 0.0]
    >>> print(using_numpy(A))
    [ 0.  9. 24. 45. 72. 45. 24.  9.  0.]
    """
    # Correction factors: rot45 columns are zero-padded to length
    # A.shape[0] - 1, so rescale each mean by (A.shape[0] - 1) / length.
    # The middle factor is 1 because the main diagonal is computed exactly.
    multiply = (A.shape[0] - 1) / np.r_[1:A.shape[0], A.shape[0] - 1, A.shape[0] - 1:0:-1]
    return multiply * apply_diag(A ** 2, np.mean) - (multiply * apply_diag(A, np.mean))**2


def list_comp(A, func):
    """
    >>> A = np.arange(25).reshape(5, 5)
    >>> list_comp(A, np.sum)
    [20, 36, 48, 56, 60, 40, 24, 12, 4]
    """
    offset_list = np.arange(-1 * len(A) + 1, len(A))
    return [func(np.diagonal(A, k)) for k in offset_list]

There is roughly a 10x speedup for (100, 100) matrices, but the advantage shrinks for larger ones (about 3x at (1000, 1000)):

In [9]: A = np.random.randn(100, 100)

In [10]: %timeit using_numpy(A)
761 µs ± 3.35 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

In [11]: %timeit list_comp(A, np.var)
9.57 ms ± 19.9 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

In [12]: A = np.random.randn(1000, 1000)

In [13]: %timeit using_numpy(A)
37.4 ms ± 125 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

In [14]: %timeit list_comp(A, np.var)
112 ms ± 927 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
V. Ayrat