I am trying to figure out how long it takes for my LDA classifier to predict the class of a single 1080-dimensional vector. I read these threads:
- Measure time elapsed in Python?
- timeit versus timing decorator
- How do I get time of a Python program's execution?
and found out that there are several ways to do this. I tested a few of them, but they produced very different results.
time module's time() function:
import time
start = time.time()
lda.predict(sample)
end = time.time()
print((end - start) * 10**6, 'µs')
>>> 1452.9228210449219 µs
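For reference, here is a minimal sketch of averaging many calls with time.perf_counter() instead of timing a single one (assuming the same lda and sample as above, and an arbitrary loop count of 10000):
import time

n = 10000  # arbitrary number of repetitions
start = time.perf_counter()
for _ in range(n):
    lda.predict(sample)
end = time.perf_counter()
# report the mean time per call in microseconds
print((end - start) / n * 10**6, 'µs per call')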
timeit module's default_timer:
from timeit import default_timer as timer
start = timer()
lda.predict(sample)
end = timer()
print((end - start) * 10**6, 'µs')
>>> 979.6129997994285 µs
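The timeit module can also run the statement itself many times and return the total time; a minimal sketch (assuming the same lda and sample, and an arbitrary 10000 loops):
from timeit import timeit

n = 10000  # arbitrary number of loops
total = timeit(lambda: lda.predict(sample), number=n)
# timeit returns the total time for n calls, in seconds
print(total / n * 10**6, 'µs per call')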
IPython's %timeit magic command:
%timeit lda.predict(sample)
>>> 52 µs ± 873 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
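As far as I understand, a rough programmatic equivalent of that %timeit call would be a sketch like this with timeit.repeat (7 runs of 10000 loops, matching the numbers in the output above):
from timeit import repeat

# 7 runs of 10000 loops each, as in the %timeit output above
runs = repeat(lambda: lda.predict(sample), repeat=7, number=10000)
# per-loop time of the fastest run, in microseconds
print(min(runs) / 10000 * 10**6, 'µs per loop (best of 7)')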
Am I doing something ridiculous here, or is there some other explanation for the differences? Which one should I trust? Thanks in advance.