
I'm trying to time the execution of an external process that I'm calling from Python. I had read that time.clock() was the way to go, but I was seeing results that were wildly inconsistent with what time.time() reported. I set up a simple example using time.sleep to mock the external process:

import time

def t1():
  t0 = time.clock()
  time.sleep(2.5)
  return time.clock() - t0

def test_t1():
  timings = []
  for i in range(100):
    timings.append(t1())
  print sum(timings)/len(timings)

t2/test_t2 are similarly defined but use time.time() instead of time.clock()
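(i.e., something along these lines:)

def t2():
  t0 = time.time()
  time.sleep(2.5)
  return time.time() - t0

def test_t2():
  timings = []
  for i in range(100):
    timings.append(t2())
  print sum(timings)/len(timings)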

>>> test_t1()
5.884e-05
>>> test_t2()
2.49959212065

Why is time.clock() so far off here?

EDIT: I should mention that I'm running this test on Mac OS X, and the deployed code will be running on Ubuntu.

Bovard

2 Answers


time.clock() returns CPU time spent in your code; since sleep uses almost no CPU, it accumulates very little time.clock() time.

time.time() measures the actual elapsed (wall-clock) time, so it does register the sleep.
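A minimal sketch illustrating the difference (the busy() helper below is invented purely for illustration): sleeping advances wall-clock time but almost no CPU time, while a busy loop advances both.

import time

def busy(seconds):
  # Burn CPU for roughly `seconds` of wall-clock time.
  end = time.time() + seconds
  while time.time() < end:
    pass

t_wall = time.time()
t_cpu = time.clock()   # CPU time on Unix; removed in Python 3.8 (time.process_time() replaces it)
time.sleep(2.5)        # wall clock advances ~2.5s, CPU time barely moves
busy(1.0)              # both advance by ~1s
print "wall:", time.time() - t_wall   # roughly 3.5
print "cpu: ", time.clock() - t_cpu   # roughly 1.0, just the busy loop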

Veedrac

time.clock() is probably accurate, but isn't measuring wall-clock time. On your box it's probably measuring CPU time. Since your test program spends almost all its time sleeping (time.sleep(2.5)), it's accumulating very little CPU time. time.time() is measuring wall-clock time, though. Read the docs for more ;-)
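For the original goal of timing an external process, here is a minimal sketch using wall-clock time; subprocess.call running "sleep 2.5" is only a stand-in for the real command, and the os.times() part is a Unix-only extra in case the children's CPU time is also of interest.

import os
import subprocess
import time

# Wall-clock timing of an external process ("sleep 2.5" stands in for the real command).
start = time.time()
subprocess.call(["sleep", "2.5"])
print "wall-clock time:", time.time() - start

# CPU time consumed by child processes (Unix only): os.times()[2] and [3] are the
# children's user and system CPU time, updated once the child has been waited on.
before = os.times()
subprocess.call(["sleep", "2.5"])
after = os.times()
print "child CPU time:", (after[2] - before[2]) + (after[3] - before[3])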

Tim Peters