Is there any standard library/numpy equivalent of the following function:
def augmented_assignment_sum(iterable, start=0):
    for n in iterable:
        start += n
    return start
?
While sum(ITERABLE) is very elegant, it uses the + operator instead of +=, which in the case of np.ndarray objects may affect performance.

I have verified that my function can be as fast as sum() (whereas its equivalent using + is much slower). Since it is a pure Python function, I suspect its performance is still handicapped, so I am looking for an alternative:
In [49]: ARRAYS = [np.random.random((1000000)) for _ in range(100)]
In [50]: def not_augmented_assignment_sum(iterable, start=0):
    ...:     for n in iterable:
    ...:         start = start + n
    ...:     return start
    ...:
In [51]: %timeit not_augmented_assignment_sum(ARRAYS)
63.6 ms ± 8.88 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [52]: %timeit sum(ARRAYS)
31.2 ms ± 2.18 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [53]: %timeit augmented_assignment_sum(ARRAYS)
31.2 ms ± 4.73 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [54]: %timeit not_augmented_assignment_sum(ARRAYS)
62.5 ms ± 12.1 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [55]: %timeit sum(ARRAYS)
37 ms ± 9.51 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [56]: %timeit augmented_assignment_sum(ARRAYS)
27.7 ms ± 2.53 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
I have tried using functools.reduce combined with operator.iadd, but its performance is similar:
In [79]: %timeit reduce(iadd, ARRAYS, 0)
33.4 ms ± 11.6 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [80]: %timeit reduce(iadd, ARRAYS, 0)
29.4 ms ± 2.31 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
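Note that the explicit start=0 matters here: the first iadd(0, array) falls back to ordinary addition and returns a fresh array, so the input list is left untouched. Calling reduce(iadd, ARRAYS) without a start would instead mutate ARRAYS[0] in place. A minimal sketch of a variant that keeps the in-place accumulation but copies the first array explicitly (my own sketch, not a standard idiom):

from functools import reduce
from operator import iadd

def reduce_iadd_sum(arrays):
    # Copy the first array so the input is not mutated, then let
    # iadd accumulate the remaining arrays into the copy in place.
    it = iter(arrays)
    acc = next(it).copy()
    return reduce(iadd, it, acc)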
I am also interested in memory efficiency, and thus prefer augmented assignment, since it does not require the creation of intermediate objects.
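As for memory, I am not aware of a ready-made numpy function for this, but a loop over np.add with an explicit out buffer should behave like the += version, creating no intermediate arrays (again only a sketch):

import numpy as np

def out_param_sum(arrays):
    # Accumulate everything into a single preallocated buffer;
    # np.add(..., out=result) writes in place, so no temporaries.
    it = iter(arrays)
    result = next(it).copy()
    for a in it:
        np.add(result, a, out=result)
    return result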