After trying several of the excellent solutions here, I ran into trouble with data that contained NaN's. Both the `uniform_filter` and `np.cumsum()` solutions caused NaN's to propagate through the output array instead of just being ignored. My solution below essentially swaps the windowed sum function in @Jaime's answer for a convolution, which keeps NaN's confined to the windows that actually contain them.
import numpy as np
from scipy.ndimage import convolve

def windowed_sum(arr: np.ndarray, radius: int) -> np.ndarray:
    """radius=1 means the pixel itself plus the 8 surrounding pixels"""
    kernel = np.ones((radius * 2 + 1, radius * 2 + 1), dtype=int)
    return convolve(arr, kernel, mode="constant", cval=0.0)

def windowed_var(arr: np.ndarray, radius: int) -> np.ndarray:
    """Note: this returns an array smaller than the input by radius on each side"""
    diameter = radius * 2 + 1
    win_sum = windowed_sum(arr, radius)[radius:-radius, radius:-radius]
    win_sum_2 = windowed_sum(arr * arr, radius)[radius:-radius, radius:-radius]
    # population variance E[x^2] - E[x]^2, with diameter**2 pixels per window
    return (win_sum_2 - win_sum * win_sum / diameter / diameter) / diameter / diameter

def windowed_std(arr: np.ndarray, radius: int) -> np.ndarray:
    output = np.full_like(arr, np.nan, dtype=np.float64)
    var_arr = windowed_var(arr, radius)
    std_arr = np.sqrt(var_arr)
    output[radius:-radius, radius:-radius] = std_arr
    return output
This runs a little slower than `uniform_filter`, but is still much faster than many other approaches (stacking shifted arrays, explicit iteration, etc.):
>>> data = np.random.random((1024, 1024))
>>> %timeit windowed_std(data, 4)
158 ms ± 695 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
For comparison, the `uniform_filter` approach takes around 36 ms on the same-sized data.
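If you want to reproduce that comparison, here is a minimal sketch of a `uniform_filter`-based windowed std, assuming the usual E[x^2] - E[x]^2 formulation; the function name and the `mode="constant"` edge handling are my own choices for illustration, not necessarily the exact code behind the 36 ms figure.

from scipy.ndimage import uniform_filter

def uniform_filter_std(arr: np.ndarray, radius: int) -> np.ndarray:
    """Windowed std via uniform_filter: sqrt(E[x^2] - E[x]^2) over the window."""
    size = radius * 2 + 1
    mean = uniform_filter(arr, size, mode="constant", cval=0.0)
    mean_sq = uniform_filter(arr * arr, size, mode="constant", cval=0.0)
    # clip guards against tiny negative values from floating-point round-off
    return np.sqrt(np.clip(mean_sq - mean * mean, 0, None))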
With some NaN's:
data = np.arange(100, dtype=np.float64).reshape(10, 10)
data[3:4, 3:4] = np.nan
windowed_std(data, 1)
array([[ nan, nan, nan, nan, nan, nan, nan, nan, nan, nan],
[ nan, 8.21, 8.21, 8.21, 8.21, 8.21, 8.21, 8.21, 8.21, nan],
[ nan, 8.21, nan, nan, nan, 8.21, 8.21, 8.21, 8.21, nan],
[ nan, 8.21, nan, nan, nan, 8.21, 8.21, 8.21, 8.21, nan],
[ nan, 8.21, nan, nan, nan, 8.21, 8.21, 8.21, 8.21, nan],
[ nan, 8.21, 8.21, 8.21, 8.21, 8.21, 8.21, 8.21, 8.21, nan],
[ nan, 8.21, 8.21, 8.21, 8.21, 8.21, 8.21, 8.21, 8.21, nan],
[ nan, 8.21, 8.21, 8.21, 8.21, 8.21, 8.21, 8.21, 8.21, nan],
[ nan, 8.21, 8.21, 8.21, 8.21, 8.21, 8.21, 8.21, 8.21, nan],
[ nan, nan, nan, nan, nan, nan, nan, nan, nan, nan]])
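As the output shows, a NaN contaminates only the windows that contain it rather than spreading across the whole array. If you need NaN's to be genuinely ignored (std computed from whatever valid pixels remain in each window), one option is to convolve a validity mask alongside a zero-filled copy of the data and divide by the per-window count of valid pixels instead of the fixed window area. This is only a sketch of that idea, not part of the code above; it reuses the `np` and `convolve` imports.

def nan_windowed_std(arr: np.ndarray, radius: int) -> np.ndarray:
    """Windowed std that ignores NaN's by counting the valid pixels in each window."""
    kernel = np.ones((radius * 2 + 1, radius * 2 + 1), dtype=float)
    valid = np.isfinite(arr).astype(float)
    filled = np.where(np.isfinite(arr), arr, 0.0)
    n = convolve(valid, kernel, mode="constant", cval=0.0)          # valid pixels per window
    win_sum = convolve(filled, kernel, mode="constant", cval=0.0)   # sum over valid pixels
    win_sum_2 = convolve(filled * filled, kernel, mode="constant", cval=0.0)
    with np.errstate(invalid="ignore", divide="ignore"):
        var = win_sum_2 / n - (win_sum / n) ** 2   # windows with no valid pixels become NaN
        return np.sqrt(np.clip(var, 0, None))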