In [40]: a=np.random.randint(1,100, (9,))
In [41]: a
Out[41]: array([ 6, 35, 69, 60, 63, 51, 72, 57, 22])
Broadcastable indices work nicely:
In [42]: i,j=np.arange(3)[:,None], np.arange(3)
In [43]: i,j
Out[43]:
(array([[0],
        [1],
        [2]]),
 array([0, 1, 2]))
In [44]: i+j
Out[44]:
array([[0, 1, 2],
       [1, 2, 3],
       [2, 3, 4]])
Applying that to the 1d array:
In [45]: a[i+j].reshape(3,3)
Out[45]:
array([[ 6, 35, 69],
       [35, 69, 60],
       [69, 60, 63]])
They could also be used for assignment, e.g. A[i,j] = a[i+j]. Indexing the previous 3x3 result (the _ from Out[45]) with i,j reads the same values back:
In [46]: _[i,j]
Out[46]:
array([[ 6, 35, 69],
       [35, 69, 60],
       [69, 60, 63]])
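A minimal sketch of the assignment use, assuming i, j and a as defined above (A is just an illustrative target array, not something from the question):

A = np.zeros((3, 3), dtype=a.dtype)   # illustrative preallocated target
A[i, j] = a[i + j]                    # broadcasted fancy-index assignment
# A now holds the same 3x3 window array as Out[45]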
as_strided works fine, but it is harder to understand and more error-prone. Recent numpy versions encourage us to use sliding_window_view instead.
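For reference, a sketch with sliding_window_view (available since numpy 1.20); it returns all 7 length-3 windows as a read-only view, of which the first 3 match the result above:

windows = np.lib.stride_tricks.sliding_window_view(a, 3)   # shape (7, 3)
windows[:3]                                                # same 3x3 block as a[i+j].reshape(3,3)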
In [47]: np.lib.stride_tricks.as_strided(a, shape=(3,3), strides=(8,8))
Out[47]:
array([[ 6, 35, 69],
       [35, 69, 60],
       [69, 60, 63]])
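The hard-coded (8,8) strides assume 8-byte integers; a slightly safer sketch derives them from the array itself:

np.lib.stride_tricks.as_strided(a, shape=(3, 3), strides=a.strides * 2)   # (8, 8) for a 64-bit int array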
As long as the strided array is only 'read', it can be used as a view, which is fast; but subsequent operations may force a copy. In fact, for this small case, generating the view is not even faster:
In [48]: timeit np.lib.stride_tricks.as_strided(a, shape=(3,3), strides=(8,8))
12.4 µs ± 43.7 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [49]: %%timeit
...: i,j=np.arange(3)[:,None], np.arange(3)
...: a[i+j].reshape(3,3)
...:
...:
8.9 µs ± 189 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
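sliding_window_view could be timed the same way (no numbers claimed here; they vary by machine and numpy version):

timeit np.lib.stride_tricks.sliding_window_view(a, 3)[:3]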
These also produce the required i,j:
np.ix_(range(3),range(3))
np.array(list(itertools.product(range(3), range(3)))).T
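A sketch spelling both out (i2/j2, i3/j3 are just illustrative names; the second form needs itertools and a reshape because it yields flat index arrays):

import itertools

i2, j2 = np.ix_(range(3), range(3))    # shapes (3,1) and (1,3), broadcast like i,j
a[i2 + j2]                             # 3x3 windows directly

pairs = np.array(list(itertools.product(range(3), range(3)))).T
i3, j3 = pairs                         # shapes (9,) and (9,)
a[i3 + j3].reshape(3, 3)               # flat result, so reshape is needed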