
I am testing the execution time of a Python script. The script begins with

from time import clock
print clock()

and ends with

print clock()

My script runs for much longer than the time difference reported by this method. Why does this happen? Is processor time counted differently than I think?

Thanks for any advice!

Sebastian Hoelzl
  • What are the durations you measured, and what did clock() return? (Out of curiosity.) – An intern has no name Jul 05 '17 at 10:11
  • You may want to have a look at https://stackoverflow.com/questions/85451/python-time-clock-vs-time-time-accuracy – bruno desthuilliers Jul 05 '17 at 10:12
  • `time.clock()` should be the time your program ran on the CPU, so as a simple example, if you're opening a file, the I/O latency won't be reported by `time.clock()` (the sketch after these comments illustrates this). Also, this function is deprecated since 3.3, see https://docs.python.org/3/library/time.html#time.clock – Adonis Jul 05 '17 at 10:17
  • Thank you! It was the 'time.clock() is used for benchmarking' note in the docs that confused me. – Sebastian Hoelzl Jul 05 '17 at 10:18
  • It is a simple SQL SELECT from a sqlite3 DB; it takes about 10 seconds on an old Pentium 4 machine, and time.clock() says it was 1.3 seconds. – Sebastian Hoelzl Jul 05 '17 at 10:19
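
A minimal sketch (not from the thread) of the distinction the comments describe: CPU time ignores time spent waiting, so a sleep, standing in for I/O waits such as a slow sqlite3 query, shows up on the wall clock but not on the CPU clock. Since `time.clock()` was removed in Python 3.8, the sketch uses `time.process_time()`, its modern CPU-time equivalent:

import time

cpu_start = time.process_time()   # CPU time consumed by this process
wall_start = time.time()          # wall-clock timestamp

time.sleep(2)  # stands in for waiting on I/O, e.g. a slow SQL query

print("CPU time elapsed: ", time.process_time() - cpu_start)  # ~0.0 s
print("wall time elapsed:", time.time() - wall_start)         # ~2.0 s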

1 Answer


If what you need is real execution time rather than CPU time, use time.time instead of time.clock; you might also be interested in the timeit module.
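
A minimal sketch of the wall-clock approach suggested above; `run_query` is a hypothetical stand-in for whatever the script actually does:

import time

def run_query():
    # hypothetical placeholder for the asker's slow sqlite3 SELECT
    time.sleep(1)

start = time.time()  # wall-clock start timestamp, in seconds
run_query()
print("elapsed:", time.time() - start, "seconds")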

Błotosmętek
  • For timing, the best thing to use is `timeit.default_timer()`. This maps to whichever time function provides the best precision and, when possible, uses a timer that isn't affected by changes in the system time; that is, a monotonic timer (see the sketch below). – Graham Dumpleton Jul 05 '17 at 10:37