I see many people suggesting `time.time()`. While `time.time()` is an accurate way of measuring the actual time of day, it is not guaranteed to give you millisecond precision! From the documentation:
> Note that even though the time is always returned as a floating point number, not all systems provide time with a better precision than 1 second. While this function normally returns non-decreasing values, it can return a lower value than a previous call if the system clock has been set back between the two calls.
This is not the procedure you want when comparing two times! It can blow up in so many interesting ways without you being able to tell what happened. In fact, when comparing two times, you don't really need to know what time of day it is, only that the two values have the same starting point. For this, the `time` library gives you another procedure: `time.clock()`. The documentation says:
> On Unix, return the current processor time as a floating point number expressed in seconds. The precision, and in fact the very definition of the meaning of "processor time", depends on that of the C function of the same name, but in any case, this is the function to use for benchmarking Python or timing algorithms.
>
> On Windows, this function returns wall-clock seconds elapsed since the first call to this function, as a floating point number, based on the Win32 function QueryPerformanceCounter(). The resolution is typically better than one microsecond.
Use `time.clock()`. (Note that `time.clock()` was deprecated in Python 3.3 and removed in Python 3.8; on current versions, `time.perf_counter()` fills the same role.)
Or if you just want to test how fast your code is running, you could make it convenient for yourself and use `timeit.timeit()`, which does all of the measuring for you and is the de facto standard way of measuring the elapsed time of code execution.
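For instance (a minimal sketch; the statement being timed is just a placeholder):

```python
import timeit

# timeit runs the snippet many times and disables garbage collection
# while timing, which smooths out one-off fluctuations.
total = timeit.timeit("sum(i * i for i in range(1000))", number=1_000)
print(f"1,000 runs took {total:.4f} seconds total")

# A callable works too:
per_run = timeit.timeit(lambda: sum(i * i for i in range(1000)),
                        number=1_000) / 1_000
print(f"about {per_run:.8f} seconds per run")
```

Note that `timeit.timeit()` returns the *total* time for all `number` repetitions, so divide by `number` to get a per-run figure.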