I wrote a program that appends to a NumPy array on each iteration and does some operations on it. The number of iterations depends on elapsed time: in one second there might be 1000 to 2500 iterations, so the array holds at most about 2500 rows for a one-second run.
I implemented a basic algorithm, but I'm not sure it's the fastest way to calculate bonus:
import numpy as np

cdef int[:, :] pl_list
cdef list pl_length
cdef list bonus

pl_list = np.array([[8, 7]], dtype=np.int32)

def modify(pl_list, pl_length):
    cdef int k_const = 10
    mean = np.mean(pl_list, axis=0)
    mean = np.subtract(mean, pl_length)
    dev = np.std(pl_list, axis=0)
    mean[0] = mean[0] / dev[0] if dev[0] != 0 else 0
    mean[1] = mean[1] / dev[1] if dev[1] != 0 else 0
    bonus = -1 + (2 / (1 + np.exp(-k_const * mean)))
    return list(bonus)
for i in range(2499):  # simplified; the real loop runs while startTime - time.clock() < seconds
    rand = np.random.randint(8, 64)
    pl_length = [rand, rand - 1]
    pl_list = np.append(pl_list, [pl_length], axis=0)
    bonus = modify(pl_list, pl_length)
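For reference, np.append on a 2-D array copies the whole array on every call, so the loop above does O(n²) work in total just on copying; np.concatenate and np.vstack perform the same copy and produce the same result, differing only in wrapper overhead. A quick sketch showing all three are equivalent here:

```python
import numpy as np

pl_list = np.array([[8, 7]], dtype=np.int32)
row = np.array([[3, 2]], dtype=np.int32)

# Three ways to append one row; each allocates a new array and copies.
a = np.append(pl_list, row, axis=0)
b = np.concatenate([pl_list, row], axis=0)
c = np.vstack([pl_list, row])

assert (a == b).all() and (b == c).all()  # identical results
```

Whichever is chosen, the copy dominates; the real win is avoiding the per-iteration reallocation entirely.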
I was thinking of speeding this program up with these ideas:

- Using np.vstack, np.stack, or maybe np.concatenate instead of np.append(pl_list, [pl_length]). (Which one might be faster?)
- Using hand-written functions to calculate the std and mean instead of np.std and np.mean, like this (because iterating over memoryviews is very fast in Cython):
      cdef int i, sm = 0
      for i in range(pl_list.shape[0]):
          sm += pl_list[i, 0]  # per-column sum (pl_list[i] alone is a row, not an int)
      mean = sm / pl_list.shape[0]
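Rather than re-scanning the whole array each iteration, the mean and std can be maintained incrementally with Welford's online algorithm, O(1) per new row. A pure-Python sketch (the class name is mine; the same logic ports directly to a Cython cdef function over memoryviews):

```python
import numpy as np

class RunningStats:
    """Welford's online algorithm: per-column mean and std, updated per row."""
    def __init__(self, ncols):
        self.n = 0
        self.mean = np.zeros(ncols)
        self.m2 = np.zeros(ncols)  # running sum of squared deviations

    def push(self, row):
        self.n += 1
        delta = row - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (row - self.mean)

    def std(self):
        # population std, matching np.std's default ddof=0
        return np.sqrt(self.m2 / self.n)

rs = RunningStats(2)
data = [[8, 7], [10, 9], [12, 11]]
for row in data:
    rs.push(np.asarray(row, dtype=float))

assert np.allclose(rs.mean, np.mean(data, axis=0))
assert np.allclose(rs.std(), np.std(data, axis=0))
```

This removes both the full-array scan and the need to keep appending rows just to recompute statistics.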
- Defining a static length (like 2500) for the memoryview, so I wouldn't need np.append and could build a queue structure on that array. (What about the queue library? Is it faster than NumPy arrays for such operations?)
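The fixed-capacity idea from the last bullet can be sketched as a preallocated buffer plus a write index; assuming the 2500-row upper bound from the question holds, no reallocation ever happens (CAP and push are illustrative names, not from the original code):

```python
import numpy as np

CAP = 2500  # assumed upper bound on iterations per run
buf = np.empty((CAP, 2), dtype=np.int32)  # allocated once, up front
count = 0

def push(row):
    """Store a row in the next free slot; return a view of the valid prefix."""
    global count
    buf[count] = row
    count += 1
    return buf[:count]  # a view, not a copy: O(1)

view = push([8, 7])
view = push([10, 9])
# view now holds both rows; np.mean(view, axis=0) etc. work as before
```

Slicing buf[:count] returns a view, so each iteration costs one row write instead of a full-array copy, which is why this typically beats np.append/vstack/concatenate regardless of which of those is fastest.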
Sorry if my questions are too many and complicated; I'm just trying to get the best possible speed.