I'm trying to time the execution of an external process that I'm calling from Python. I saw here that time.clock() was the way to go, but its results were wildly inconsistent with what time.time() reported. I set up a simple example using time.sleep() to mock the external process:
import time

def t1():
    # Time a 2.5 s sleep using time.clock()
    t0 = time.clock()
    time.sleep(2.5)
    return time.clock() - t0

def test_t1():
    # Average the measurement over 100 runs
    timings = []
    for i in range(100):
        timings.append(t1())
    print sum(timings) / len(timings)
t2/test_t2 are defined the same way, but use time.time() instead of time.clock().
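For reference, a minimal sketch of t2/test_t2, assuming they mirror t1/test_t1 exactly with only the clock call swapped:

import time

def t2():
    # Same measurement, but using time.time() (wall-clock time)
    t0 = time.time()
    time.sleep(2.5)
    return time.time() - t0

def test_t2():
    # Average the measurement over 100 runs
    timings = []
    for i in range(100):
        timings.append(t2())
    print sum(timings) / len(timings)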
>>> test_t1()
5.884e-05
>>> test_t2()
2.49959212065
Why is time.clock() so far off here?
EDIT: I should mention that I'm running this test on Mac OS X, and the deployed code will run on Ubuntu.