
I am using Python's time module to gauge how long a Selenium process takes. My script looks like this:

import time

start_time = time.clock()
...
# ending with
final_time = '{0:.2f}'.format(time.clock() - start_time)

When run on a Windows OS I get something like 55.22, but when run on a Mac it returns something like 0.14, even though the elapsed time was about the same.

Any idea what is happening differently on the Mac? I am actually going to try on Ubuntu as well to see the differences.

Shane

2 Answers


Per the documentation, time.clock is different between Unix (including Mac OS X) and Windows:

On Unix, return the current processor time as a floating point number expressed in seconds. The precision, and in fact the very definition of the meaning of “processor time”, depends on that of the C function of the same name, but in any case, this is the function to use for benchmarking Python or timing algorithms.

On Windows, this function returns wall-clock seconds elapsed since the first call to this function, as a floating point number, based on the Win32 function QueryPerformanceCounter(). The resolution is typically better than one microsecond.

If you want cross-platform consistency, consider time.time.
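
Applied to the script in the question, that could look something like this (just a sketch that keeps the original structure, with the Selenium steps left as a placeholder):

import time

start_time = time.time()  # wall-clock seconds since the epoch, same meaning on every platform
# ... run the Selenium steps ...
final_time = '{0:.2f}'.format(time.time() - start_time)  # elapsed wall-clock seconds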

The difference between processor time and wall-clock time is explained in this article by Doug Hellmann: basically, the processor clock only advances while your process is doing work.
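
To see that difference in action, here is a minimal sketch. It uses time.process_time and time.perf_counter, the Python 3.3+ names for CPU time and wall-clock time; on Python 2 the rough equivalents are time.clock (on Unix) and time.time:

import time

cpu_start = time.process_time()   # CPU time: advances only while this process is using the CPU
wall_start = time.perf_counter()  # wall-clock time: always advances

time.sleep(2)  # the process is idle here, so CPU time barely moves

print('CPU time:  {0:.2f}'.format(time.process_time() - cpu_start))   # roughly 0.00
print('wall time: {0:.2f}'.format(time.perf_counter() - wall_start))  # roughly 2.00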

jonrsharpe
  • Thank you, always a good idea to read the directions first, but that is not so fun ;-) – Shane Aug 14 '14 at 21:02
  • [use `timeit.default_timer()` that works across platforms and Python versions e.g., it uses `time.perf_counter()` when available](http://stackoverflow.com/questions/85451/python-time-clock-vs-time-time-accuracy#comment18341094_85536) – jfs Oct 28 '14 at 07:15

The timeit module in the standard library uses timeit.default_timer to measure wall time:

if sys.platform == "win32":
    # On Windows, the best timer is time.clock()
    default_timer = time.clock
else:
    # On most other platforms the best timer is time.time()
    default_timer = time.time

help(timeit) explains:

The difference in default timer function is because on Windows,
clock() has microsecond granularity but time()'s granularity is 1/60th
of a second; on Unix, clock() has 1/100th of a second granularity and
time() is much more precise.  On either platform, the default timer
functions measure wall clock time, not the CPU time.  This means that
other processes running on the same computer may interfere with the
timing.  The best thing to do when accurate timing is necessary is to
repeat the timing a few times and use the best time.  The -r option is
good for this; the default of 3 repetitions is probably enough in most
cases.  On Unix, you can use clock() to measure CPU time.
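
As a concrete illustration of the "repeat and take the best" advice, here is a minimal sketch using timeit.repeat; the statement being timed is only a placeholder:

import timeit

# run the measurement 3 times and keep the lowest value, which is the
# least distorted by other processes competing for the machine
best = min(timeit.repeat('sum(range(1000))', repeat=3, number=10000))
print('best of 3 runs: {0:.4f} seconds'.format(best))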

So for cross-platform consistency you could use

import timeit
clock = timeit.default_timer

start_time = clock()
...
final_time = '{0:.2f}'.format(clock() - start_time)
unutbu
  • +1 for `timeit.default_timer()`. In Python 3.3+, `timeit.default_timer()` is `time.perf_counter()` on all platforms. – jfs Oct 28 '14 at 07:16