This is my current code (full listing below).
It runs fine when the second for loop has a small number of elements (around 10k) and takes only a few seconds, but when the second for loop has a large number of elements (around 40k) it takes about 60 seconds or more. Why?
For example, sometimes the second for loop executes in less time with 28k elements than with 7k elements. I don't understand why the execution time isn't linearly dependent on the number of operations.
Also, as a general rule, the longer the code runs, the longer each loop takes.
To recap, the execution times usually follow these rules:
- operations < 10k: time < 5 seconds
- 10k < operations < 40k: 10 s < time < 30 s (seems random)
- operations > 40k: time ~ 60 seconds
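
To show how I measure these numbers, here is a stripped-down sketch of just the timed section (the matrix size, the stand-in ILO vector, and the use of time.perf_counter instead of time.time are my choices for the sketch, not part of the real program):

import time
import numpy as np
from collections import deque

# Stand-ins for the real data: one (3600, n_cols) matrix and a fixed
# 3600-sample vector that multiplies every column.
ILO = np.cos(np.arange(3600) * 0.75)
element = np.ones((3600, 28000))

start = time.perf_counter()  # monotonic clock with better resolution than time.time
Imatrix = deque()
for colonna in element.T:
    Imatrix.append(ILO * colonna)  # one "operation" per column
print(element.shape[1], 'operations in', time.perf_counter() - start, 'seconds')

For completeness, the full code is below.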
import numpy as np
import time
import gc
import random
from collections import deque

center_freq = 20000000
smpl_time = 0.03749995312 * pow(10, -6)

# Ten test matrices, each with 3600 rows and a random number of columns.
mat_contents = []
for i in range(10):
    mat_contents.append(np.ones((3600, random.randint(3000, 30000))))

# Sample times for one column (built incrementally with np.append).
tempo = np.empty([0, 0])
for i in range(3600):
    tempo = np.append(tempo, center_freq * smpl_time * i)
ILO = np.cos(tempo)

check = 0
for element in mat_contents:
    start_time = time.time()
    Imatrix = deque([])
    gc.disable()
    # The second for loop: multiply every column of the matrix by ILO.
    for colonna in element.T:
        Imatrix.append(np.multiply(ILO, colonna))
    gc.enable()
    varI = np.var(Imatrix)
    tempImean = np.mean(Imatrix)
    print('\nSize: ', len(element.T))
    print("--- %s seconds ---" % (time.time() - start_time))
    print("--- ", check, ' ---')
    check += 1
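
For reference, I believe the inner loop is equivalent to a single broadcasted multiply over the whole matrix (a sketch, assuming the intent is just to multiply every column element-wise by ILO):

# Same result as the per-column loop: broadcasting ILO across the rows
# of element.T yields an (n_cols, 3600) array in one operation.
Imatrix = element.T * ILO
varI = np.var(Imatrix)
tempImean = np.mean(Imatrix)

Timing this variant against the loop might help show whether the loop itself or the many small allocations dominate, but my question is about the loop version above.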