The numpy approach I show later in my answer is actually slower than just iterating over the array. You will get the best performance by simply using numba
to JIT-compile your loopy function:
import numba

@numba.njit
def andrey_numba(array, limit):
    sum_arr = 0
    for i in range(len(array)):
        sum_arr += array[i]
        if sum_arr > limit:
            # Running sum went over the limit: reset it and zero this element
            sum_arr = 0
            array[i] = 0
    return array
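For example, assuming the input is np.arange(1, 10) with a limit of 10 (which matches the example output further down):

import numpy as np

arr = np.arange(1, 10)
print(andrey_numba(arr, 10))  # [1 2 3 4 0 6 0 8 0]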
Timeless's creative application of np.ufunc.accumulate is about as fast as your loopy approach.
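I haven't reproduced their answer here, but an accumulate-based version might look roughly like this. This is only a sketch of the general idea, not Timeless's exact code; the "running sum of 0 means a reset happened" check assumes positive input values:

import numpy as np

def accumulate_sketch(array, limit):
    # Binary op that resets the running sum to 0 whenever it would exceed the limit
    reset_add = np.frompyfunc(lambda acc, x: acc + x if acc + x <= limit else 0, 2, 1)
    running = reset_add.accumulate(array, dtype=object)
    # Positions where the running sum was reset are the elements to zero out
    # (assumes positive values, so a 0 in the running sum only occurs at a reset)
    array[running == 0] = 0
    return array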

I can't think of a way to avoid loops completely, but you could loop for as long as the cumsum
contains any element greater than the limit, zero out the first offending element, and subtract its cumulative sum from the remaining elements:
def pranav(array, limit):
    cumsum_arr = array.cumsum()
    over_limit = cumsum_arr > limit
    iters = 0  # Just to keep track of how many iterations
    while over_limit.any():
        iters += 1
        over_limit_index = over_limit.argmax()  # https://stackoverflow.com/a/48360950/843953
        array[over_limit_index] = 0
        cumsum_arr[over_limit_index:] -= cumsum_arr[over_limit_index]
        over_limit = cumsum_arr > limit
    return array
Which leaves you with the desired array, in fewer iters (3 instead of 9):
array([1, 2, 3, 4, 0, 6, 0, 8, 0])
However, it actually takes more time, since you end up doing a lot more calculations on each iteration. IMO this is a good illustration that not everything is made more efficient by using numpy. There are probably ways to gain performance by tweaking my numpy code, but I decided it's not worth it, since numba and plain Python loops perform well enough.
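If you want to check the relative timings yourself, a minimal harness along these lines should do it. The array size, limit, and use of timeit are my own choices here; the copies are needed because both functions modify their input in place:

import timeit
import numpy as np

base = np.arange(1, 10_000)  # arbitrary test data; use whatever matches your case
limit = 10

andrey_numba(base.copy(), limit)  # warm-up call so numba's compilation isn't timed

for name, func in [("andrey_numba", andrey_numba), ("pranav", pranav)]:
    # copy() on each run because both functions modify the array in place
    t = timeit.timeit(lambda: func(base.copy(), limit), number=100)
    print(name, t)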