As suggested in the comments, this works with scipy.signal.convolve2d:
import numpy as np
from scipy import signal

kernel = np.array([[1, 1, 1],
                   [1, 1, 1],
                   [1, 1, 1]])
grad = signal.convolve2d(data, kernel, 'same')
grad = grad / 9
Then divide the result by the number of elements in the kernel; for a 3x3 kernel that is 9. This works with smaller and larger kernels as well.
More theory here; it helped me a lot to understand the convolution operation: machinelearninguru.com
If you don't want to use SciPy, it also works with NumPy only:
NumPy Example
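A minimal NumPy-only sketch of the same box sum, assuming zero padding at the borders: pad the array by one cell, then add up the nine shifted views, which is equivalent to convolving with a 3x3 ones kernel.

```python
import numpy as np

data = np.arange(1, 16).reshape(5, 3).astype(float)

# Zero-pad by one cell so every position has a full 3x3 window.
padded = np.pad(data, 1, mode='constant')

# Sum the nine shifted views -- same result as convolve2d with a ones kernel.
total = sum(padded[i:i + data.shape[0], j:j + data.shape[1]]
            for i in range(3) for j in range(3))

grad = total / 9
```

With zero padding, border cells still get divided by 9, so their means are pulled toward zero; the divisor construction below corrects for that.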
If the means at the corners and edges should reflect only the cell and its actual neighbors, a divisor for the convolve2d result can be constructed as:
corners = (np.array([0, 0, -1, -1], dtype=np.int32),
           np.array([0, -1, 0, -1], dtype=np.int32))
edges = np.ones(data.shape, dtype=bool)  # np.bool was removed from NumPy; use the builtin bool
edges[1:-1, 1:-1] = False
edges[corners] = False
divisor = np.ones(data.shape) * 9
divisor[corners] = 4
divisor[edges] = 6
grad = signal.convolve2d(data, kernel, 'same')
grad = grad / divisor
For an initial array of data = np.arange(1, (5*3)+1).reshape((5, 3))
this results in:
In [35]: data
Out[35]:
array([[ 1,  2,  3],
       [ 4,  5,  6],
       [ 7,  8,  9],
       [10, 11, 12],
       [13, 14, 15]])

In [36]: divisor
Out[36]:
array([[ 4.,  6.,  4.],
       [ 6.,  9.,  6.],
       [ 6.,  9.,  6.],
       [ 6.,  9.,  6.],
       [ 4.,  6.,  4.]])

In [37]: grad
Out[37]:
array([[  3. ,   3.5,   4. ],
       [  4.5,   5. ,   5.5],
       [  7.5,   8. ,   8.5],
       [ 10.5,  11. ,  11.5],
       [ 12. ,  12.5,  13. ]])
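The divisor doesn't have to be built by hand: convolving an array of ones with the same kernel counts how many cells fall inside each window, which produces exactly the 4/6/9 pattern above and generalizes to any kernel size. A sketch, assuming SciPy is available:

```python
import numpy as np
from scipy import signal

data = np.arange(1, 16).reshape(5, 3).astype(float)
kernel = np.ones((3, 3))

sums = signal.convolve2d(data, kernel, mode='same')

# Counting neighbors: a ones array convolved with the same kernel yields
# 4 at corners, 6 along edges, and 9 in the interior.
counts = signal.convolve2d(np.ones_like(data), kernel, mode='same')

grad = sums / counts
```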