How can assignment of a list of elements into an array be sped up? For scalar data, the answer would be to pre-allocate an array:
import numpy as np

maxTime = 1000
A = np.zeros(maxTime)          # pre-allocate once, to the known final size
for t in range(maxTime):
    data = get_fancy_data()    # one scalar per timestep
    A[t] = data
However, if the data returned by get_fancy_data() is a list that is a different size every timestep, how can this be done efficiently?
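For concreteness, here is a minimal sketch of that situation; get_fancy_data(t) is a hypothetical stand-in for whatever produces the variable-length list each timestep:

import numpy as np

def get_fancy_data(t):
    # hypothetical stand-in: returns a list whose length varies with t
    return list(np.random.random(t % 7 + 1))

maxTime = 1000
A = np.zeros(maxTime)              # pre-allocated to the maximum size
for t in range(maxTime):
    data = get_fancy_data(t)
    A[:len(data)] = data           # slice assignment copies len(data) elements each step

Timing slice assignment against simply rebinding the name gives: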
python -m timeit -s "import numpy as np; N=10**4; r=np.random.random(N); A=np.zeros(N);" \
"for i in range(N): A[:i+1] = r[:i+1]"
# 100 loops, best of 3: 14.9 msec per loop
python -m timeit -s "import numpy as np; N=10**4; r=np.random.random(N); A=np.zeros(N);" \
"for i in range(N): A = r[:i+1]"
# 1000 loops, best of 3: 1.68 msec per loop
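The same comparison can also be run from within a script via the timeit module, if that is more convenient (the exact numbers will of course differ between machines):

import timeit

setup = "import numpy as np; N=10**4; r=np.random.random(N); A=np.zeros(N)"
slice_assign = "for i in range(N): A[:i+1] = r[:i+1]"   # copies into the pre-allocated array
rebind = "for i in range(N): A = r[:i+1]"                # only rebinds the name to a slice of r

print(timeit.timeit(slice_assign, setup=setup, number=10) / 10)
print(timeit.timeit(rebind, setup=setup, number=10) / 10)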
Not pre-allocating A=np.zeros(N) in the second example doesn't significantly alter the time taken.
I am not entirely sure why the second example is faster. I suspect that the slice assignment to A[:i+1] has to copy that part of r into A element by element, à la Python list slicing efficiency.
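One way to probe this (assuming a NumPy recent enough to have np.shares_memory, i.e. 1.11+) is to check whether each statement actually touches A's buffer or merely produces a view of r:

import numpy as np

N = 10**4
r = np.random.random(N)
A = np.zeros(N)

A[:5] = r[:5]                       # slice assignment writes into A's existing buffer
print(np.shares_memory(A, r))       # False: A still owns its own data

B = r[:5]                           # plain slicing copies nothing at all
print(np.shares_memory(B, r))       # True: B is merely a view into r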
I have some code whose bottleneck is exactly this kind of operation, but I could not find a faster approach.
I note that this is related to another question because it concerns the meaning of A[0:2]: slice assignment modifies the original array A rather than creating a new array and discarding the old. However, this question is about modifying an array A in a way that is faster than repeatedly making new arrays.
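To make that distinction concrete, a small illustrative sketch (the names here are arbitrary):

import numpy as np

A = np.zeros(5)
B = A                       # a second name for the same array

A[0:2] = [1.0, 2.0]         # in-place: writes into the existing buffer
print(B)                    # [1. 2. 0. 0. 0.]  -- B sees the change

A = np.array([9.0, 9.0])    # rebinding: the name A now points at a brand-new array
print(B)                    # [1. 2. 0. 0. 0.]  -- the old array is untouched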