I have big arrays to multiply over a large number of iterations. I am training a model with arrays of length around 1500, and I will perform 3 multiplications about 1,000,000 times, which takes a long time, almost a week.
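To make the workload concrete, here is a minimal sketch of one step (the array length, the three multiplications, and the iteration count are from my setup; the array contents and which dot products I take are just placeholders):

import numpy as np

rng = np.random.default_rng(0)
a = rng.random(1500)  # arrays of length ~1500
b = rng.random(1500)

# ~1,000,000 iterations, 3 dot products each (placeholder operations)
for step in range(1_000_000):
    p1 = np.dot(a, b)
    p2 = np.dot(a, a)
    p3 = np.dot(b, b)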
I found Dask and tried to compare it with the normal NumPy way, but I found NumPy faster:
import time
import numpy as np
import dask.array as da

x = np.arange(2000)

# Dask version: time 100 dot products on a chunked dask array
start = time.time()
y = da.from_array(x, chunks=100)
for i in range(100):
    p = y.dot(y)
    # print(p)
print(time.time() - start)
print('------------------------------')

# NumPy version: time the same 100 dot products on the plain array
start = time.time()
p = 0
for i in range(100):
    p = np.dot(x, x)
print(time.time() - start)
Output:

0.08502793312072754
0.00015974044799804688
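From what I've read, Dask arrays are lazy, so I suspect my loop above only builds a task graph and never actually runs the dot products. Here is a variant with .compute() added to force the work (I'm not sure this is the intended usage):

import time
import numpy as np
import dask.array as da

x = np.arange(2000)
y = da.from_array(x, chunks=100)

start = time.time()
for i in range(100):
    # .compute() materializes the result instead of just building a graph
    p = y.dot(y).compute()
print(time.time() - start)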
Am I using Dask wrong, or is NumPy just that fast?