I have a function f(x) that takes as input a list x of 100 random floats between 0 and 1. Different lists will result in different running times of f.

I want to find out how long f takes to run on average, over a large number of different random lists. What's the best way to do this? Should I use timeit, and if so, is there a way I can do this without including the time it takes to generate each random list in each trial?
This is roughly how I would do it without timeit:
import random
import time
from statistics import mean

results = []
for i in range(10000):
    x = [random.random() for _ in range(100)]  # fresh random list for each trial
    start = time.perf_counter()
    f(x)
    end = time.perf_counter()
    results.append(end - start)
print(mean(results))
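If timeit can do this, I imagine it would look something like the sketch below, using repeat with number=1 so the setup statement (which builds a fresh random list) runs before every timed call and, as far as I understand, is not counted in the measurement. I'm not sure this is the right approach, which is part of what I'm asking; it assumes f and random are defined at module level so they are visible through globals().

import random
import timeit

# Sketch only: with number=1, each repetition times a single call to f,
# and the setup statement (building a fresh 100-element random list) runs
# before each repetition without being included in the measured time.
times = timeit.repeat(
    stmt="f(x)",
    setup="x = [random.random() for _ in range(100)]",
    globals=globals(),  # assumes f and random are available in this module
    repeat=10000,
    number=1,
)
print(sum(times) / len(times))

I don't know whether calling repeat with number=1 like this defeats the purpose of timeit, which is why I'm asking.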