With the following code, I get the impression that inserting into a NumPy array gets slower as the array grows, i.e. the cost of each insert depends on the current array size.
Are there any NumPy-based workarounds for this performance limitation (or non-NumPy-based ones)? One idea I have been considering is sketched at the end of the post.
```python
if True:
    import numpy as np
    import datetime
    import timeit

    myArray = np.empty((0, 2), dtype='object')
    myString = "myArray = np.insert(myArray, myArray.shape[0], [[ds, runner]], axis=0)"
    runner = 1
    ds = datetime.datetime.utcfromtimestamp(runner)

%timeit myString
# 19.3 ns ± 0.715 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)

for runner in range(30_000):
    ds = datetime.datetime.utcfromtimestamp(runner)
    myArray = np.insert(myArray, myArray.shape[0], [[ds, runner]], axis=0)
print("len(myArray):", len(myArray))
# len(myArray): 30000

%timeit myString
# 38.1 ns ± 1.1 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
```
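For reference, the kind of non-NumPy workaround I have in mind is to collect the rows in a plain Python list (where appends are amortized O(1)) and build the array once at the end. This is only a rough sketch of that idea, not something I have profiled against the loop above:

```python
import datetime

import numpy as np

# Collect rows in a Python list first; list.append does not copy the existing data.
rows = []
for runner in range(30_000):
    ds = datetime.datetime.utcfromtimestamp(runner)
    rows.append((ds, runner))

# Convert to a (30000, 2) object array in a single step at the end.
myArray = np.array(rows, dtype='object')
print("len(myArray):", len(myArray))
```

Preallocating with `np.empty((30_000, 2), dtype='object')` and assigning rows in place would be another option when the final size is known up front, but I am mainly interested in whether there is a recommended NumPy idiom for the case where the array has to grow.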