114

I'm trying to time some code. First I used a timing decorator:

#!/usr/bin/env python

import time
from itertools import izip
from random import shuffle

def timing_val(func):
    def wrapper(*arg, **kw):
        '''source: http://www.daniweb.com/code/snippet368.html'''
        t1 = time.time()
        res = func(*arg, **kw)
        t2 = time.time()
        return (t2 - t1), res, func.__name__
    return wrapper

@timing_val
def time_izip(alist, n):
    i = iter(alist)
    return [x for x in izip(*[i] * n)]

@timing_val
def time_indexing(alist, n):
    return [alist[i:i + n] for i in range(0, len(alist), n)]

func_list = [locals()[key] for key in locals().keys()
             if callable(locals()[key]) and key.startswith('time')]
shuffle(func_list)  # Shuffle, just in case the order matters

alist = range(1000000)
times = []
for f in func_list:
    times.append(f(alist, 31))

times.sort(key=lambda x: x[0])
for (time, result, func_name) in times:
    print '%s took %0.3fms.' % (func_name, time * 1000.)

yields

% test.py
time_indexing took 73.230ms.
time_izip took 122.057ms.

And here I use timeit:

% python -m timeit -s '' 'alist=range(1000000);[alist[i:i+31] for i in range(0, len(alist), 31)]'
10 loops, best of 3: 64 msec per loop
% python -m timeit -s 'from itertools import izip' 'alist=range(1000000);i=iter(alist);[x for x in izip(*[i]*31)]'
10 loops, best of 3: 66.5 msec per loop

Using timeit, the results are virtually the same, but with the timing decorator it appears that time_indexing is faster than time_izip.

What accounts for this difference?

Should either method be believed?

If so, which?

unutbu
  • By the way, the `func_name` attribute is not reliably available for methods, only for functions, so `__name__` may be preferable instead. Try it with `time.sleep`. – Asclepius Jun 12 '14 at 16:27

9 Answers

156

Use `wraps` from `functools` to improve Matt Alcock's answer.

from functools import wraps
from time import time

def timing(f):
    @wraps(f)
    def wrap(*args, **kw):
        ts = time()
        result = f(*args, **kw)
        te = time()
        print('func:%r args:[%r, %r] took: %2.4f sec' % \
          (f.__name__, args, kw, te-ts))
        return result
    return wrap

For example:

@timing
def f(a):
    for _ in range(a):
        i = 0
    return -1

Invoking function f wrapped with @timing:

f(100000000)

prints:

func:'f' args:[(100000000,), {}] took: 14.2240 sec

The advantage of this is that it preserves attributes of the original function; that is, metadata such as the function name and docstring is carried over to the returned wrapper.
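
To see what `wraps` buys you, here is a minimal sketch; `plain_timing` is a hypothetical copy of the decorator with the `@wraps(f)` line removed (and the timing stripped out for brevity):

from functools import wraps
from time import time

def timing(f):
    @wraps(f)  # copies __name__, __doc__, etc. from f onto the wrapper
    def wrap(*args, **kw):
        ts = time()
        result = f(*args, **kw)
        print('func:%r took: %2.4f sec' % (f.__name__, time() - ts))
        return result
    return wrap

def plain_timing(f):
    def wrap(*args, **kw):  # no @wraps, so wrap keeps its own metadata
        return f(*args, **kw)
    return wrap

@timing
def slow_add(a, b):
    """Add two numbers."""
    return a + b

@plain_timing
def slow_sub(a, b):
    """Subtract two numbers."""
    return a - b

print(slow_add.__name__, slow_add.__doc__)  # slow_add Add two numbers.
print(slow_sub.__name__, slow_sub.__doc__)  # wrap None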

Rune Kaagaard
Jonathan Prieto-Cubides
  • I don't understand what extra functionality wraps is adding here. Could you expand on this? – TheRealFakeNews Jan 30 '19 at 21:36
  • `wraps` copies metadata about the inner function to the outer function. Without it, the decorated function object will reference the wrapper rather than the inner function. This is only an issue when using tools that use introspection, such as debuggers. For instance, if we called `help` on a decorated function without `wraps`, the help would be on the decorator instead of the inner function. – Thang Feb 27 '20 at 17:06
  • However, in the case of measuring timings, there will not really be an issue, so I don't understand why this should be applicable in this case. The wrapper will not be part of any production code. – skjerns Oct 22 '20 at 20:57
  • @skjerns I don't think you can assume a timing decorator will not be part of any production code. Logging instrumentation data in production is definitely a thing. Since there's not really any downside to following this best practice, I think it's best to default to using `wraps`. – Dustin Wyatt Nov 27 '21 at 19:53
  • If you’re using Python 3.6 or above, you can replace the print with something along the lines of `print(f'Function {f.__name__} took {te-ts:2.4f} seconds')` – kotchwane Dec 01 '21 at 17:15
  • Perfect, this is exactly what I was looking for, thank you! – jacktrader Jun 06 '23 at 13:44
51

I would use a timing decorator, because you can use it to sprinkle timing around your code rather than cluttering your code with timing logic.

import time

def timeit(f):

    def timed(*args, **kw):

        ts = time.time()
        result = f(*args, **kw)
        te = time.time()

        print 'func:%r args:[%r, %r] took: %2.4f sec' % \
          (f.__name__, args, kw, te-ts)
        return result

    return timed

Using the decorator is easy: either use the @ syntax,

@timeit
def compute_magic(n):
     #function definition
     #....

Or re-alias the function you want to time.

compute_magic = timeit(compute_magic)
Asclepius
Matt Alcock
  • I believe using functools.wraps here would be a small improvement – kuzzooroo Aug 23 '14 at 01:40
  • Out of curiosity, was this answer copied from here?: https://www.andreas-jung.com/contents/a-python-decorator-for-measuring-the-execution-time-of-methods – emschorsch Mar 22 '16 at 22:44
  • I think it is perfectly OK to reference code from a different place on Stack Overflow. I don't see any issue with that @emschorsch – London guy Mar 17 '18 at 15:02
24

Use timeit. Running the test more than once gives me much better results.

func_list = [locals()[key] for key in locals().keys()
             if callable(locals()[key]) and key.startswith('time')]

alist = range(1000000)
times = []
for f in func_list:
    n = 10
    times.append(min(t for t, _, _ in (f(alist, 31) for i in range(n))))

for (time, func_name) in zip(times, func_list):
    print '%s took %0.3fms.' % (func_name, time * 1000.)

->

<function wrapper at 0x01FCB5F0> took 39.000ms.
<function wrapper at 0x01FCB670> took 41.000ms.
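
For comparison, here is a minimal sketch of the same take-the-minimum idea using the standard library's timeit.repeat (Python 3 syntax, with zip standing in for izip; the variable names are just for illustration):

import timeit

# Run each statement several times and keep the best (lowest) timing,
# which is exactly what timeit.repeat() is designed for.
indexing_stmt = '[alist[i:i + 31] for i in range(0, len(alist), 31)]'
zip_stmt = 'i = iter(alist); [x for x in zip(*[i] * 31)]'
setup = 'alist = list(range(1000000))'

for name, stmt in [('indexing', indexing_stmt), ('zip', zip_stmt)]:
    best = min(timeit.repeat(stmt, setup=setup, repeat=5, number=10)) / 10
    print('%s took %0.3fms.' % (name, best * 1000.))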
Jochen Ritzel
8

Inspired by Micah Smith's answer, this uses funcy to print directly instead of going through the logging module.

The snippet below is convenient for use in Google Colab.

# pip install funcy
from funcy import print_durations

@print_durations()
def myfunc(n=0):
  for i in range(n):
    pass

myfunc(123)
myfunc(123456789)

# 5.48 mks in myfunc(123)
# 3.37 s in myfunc(123456789)
newbee
6

I got tired of `from __main__ import foo`, and now use this -- for simple args for which %r works, and not in IPython.
(Why does timeit work only on strings, not thunks / closures, i.e. timefunc(f, arbitrary args)?)


import timeit

def timef( funcname, *args, **kwargs ):
    """ timeit a func with args, e.g.
            for window in ( 3, 31, 63, 127, 255 ):
                timef( "filter", window, 0 )
    This doesn't work in ipython;
    see Martelli, "ipython plays weird tricks with __main__" in Stackoverflow        
    """
    argstr = ", ".join([ "%r" % a for a in args]) if args  else ""
    kwargstr = ", ".join([ "%s=%r" % (k,v) for k,v in kwargs.items()]) \
        if kwargs  else ""
    comma = ", " if (argstr and kwargstr)  else ""
    fargs = "%s(%s%s%s)" % (funcname, argstr, comma, kwargstr)
        # print "test timef:", fargs
    t = timeit.Timer( fargs, "from __main__ import %s" % funcname )
    ntime = 3
    print "%.0f usec %s" % (t.timeit( ntime ) * 1e6 / ntime, fargs)

#...............................................................................
if __name__ == "__main__":
    def f( *args, **kwargs ):
        pass

    try:
        from __main__ import f
    except:
        print "ipython plays weird tricks with __main__, timef won't work"
    timef( "f")
    timef( "f", 1 )
    timef( "f", """ a b """ )
    timef( "f", 1, 2 )
    timef( "f", x=3 )
    timef( "f", x=3 )
    timef( "f", 1, 2, x=3, y=4 )

Added: see also Martelli's "ipython plays weird tricks with __main__" in running-doctests-through-ipython.
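
(Note: timeit can also take a zero-argument callable as the statement, so wrapping the call in a lambda avoids the string building and the `from __main__ import` dance. A minimal sketch, with f standing in for whatever you want to time:)

import timeit

def f(*args, **kwargs):
    pass

# timeit.timeit() accepts a callable with no arguments, so a lambda
# works as a thunk around an arbitrary call.
n = 1000
total = timeit.timeit(lambda: f(1, 2, x=3), number=n)
print("%.0f usec per call" % (total * 1e6 / n))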

denis
  • Thank you! This certainly makes it easier to drop functions into timeit. You could omit `argstr`,`kwargstr`,`comma` if you use `fargs='%s(*%s,**%s)'%(funcname,args,kwargs)`, but perhaps it makes fargs a little harder to read. – unutbu Oct 28 '09 at 12:26
4

This is the type of need for which you hope a library provides a portable solution -- DRY! Fortunately, funcy.log_durations comes to the rescue.

Example copied from documentation:

import logging
from funcy import log_durations

@log_durations(logging.info)
def do_hard_work(n):
    samples = range(n)
    # ...

# 121 ms in do_hard_work(10)
# 143 ms in do_hard_work(11)
# ...

Browse the funcy documentation for other variants such as different keyword arguments and @log_iter_durations.
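
Note that logging.info only produces visible output if logging is configured at INFO level or below; by default the root logger only shows WARNING and above. A minimal sketch of wiring that up (the function body and argument here are illustrative):

import logging
from funcy import log_durations

# The root logger defaults to WARNING, so INFO messages would be
# silently dropped without this configuration.
logging.basicConfig(level=logging.INFO)

@log_durations(logging.info)
def do_hard_work(n):
    return sum(range(n))

do_hard_work(10_000_000)
# Logs something like: INFO:root:<duration> in do_hard_work(10000000)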

Micah Smith
2

Just a guess, but could the difference be due to the order-of-magnitude difference in the range() values?

From your original source:

alist=range(1000000)

From your timeit example:

alist=range(100000)

For what it's worth, here are the results on my system with the range set to 1 million:

$ python -V
Python 2.6.4rc2

$ python -m timeit -s 'from itertools import izip' 'alist=range(1000000);i=iter(alist);[x for x in izip(*[i]*31)]'
10 loops, best of 3: 69.6 msec per loop

$ python -m timeit -s '' 'alist=range(1000000);[alist[i:i+31] for i in range(0, len(alist), 31)]'
10 loops, best of 3: 67.6 msec per loop

I wasn't able to get your other code to run, since I could not import the "decorator" module on my system.


Update - I see the same discrepancy you do when I run your code without the decorator involved.

$ ./test.py
time_indexing took 84.846ms.
time_izip took 132.574ms.

Thanks for posting this question; I learned something today. =)

mpontillo
  • I've removed the decorator module so my code is easier to run. Would you give it a try? Do you see a distinct difference in speed when you run the script? Also, I changed the range from 10^5 --> 10^6, so the comparison is more equal. Thanks. – unutbu Oct 26 '09 at 12:44
  • Updated, for what it's worth, but it seems like you got your answer now. No prob. – mpontillo Oct 26 '09 at 15:53
1

Regardless of this particular exercise, I'd imagine that using timeit is the much safer and more reliable option. It is also cross-platform, unlike your solution.

SilentGhost
0

Here's a decorator that works for both async and sync functions and prints nice human-readable output, e.g. 8us, 200ms, etc.

import asyncio
import time
from typing import Callable, Any

def timed(fn: Callable[..., Any]) -> Callable[..., Any]:
    """
    Decorator to log the start time and duration of a function call

    :param fn: Function to decorate
    :return: Decorated function

    Example:
    >>> @timed
    ... def test_fn():
    ...     time.sleep(1)
    >>> test_fn()

    """

    def wrapped_fn(*args: Any, **kwargs: Any) -> Any:
        start = time.time()
        print(f'Running {fn.__name__}...')
        ret = fn(*args, **kwargs)
        duration_str = get_duration_str(start)
        print(f'Finished {fn.__name__} in {duration_str}')
        return ret

    async def wrapped_fn_async(*args: Any, **kwargs: Any) -> Any:
        start = time.time()
        print(f'Running {fn.__name__}...')
        ret = await fn(*args, **kwargs)
        duration_str = get_duration_str(start)
        print(f'Finished {fn.__name__} in {duration_str}')
        return ret

    if asyncio.iscoroutinefunction(fn):
        return wrapped_fn_async
    else:
        return wrapped_fn


def get_duration_str(start: float) -> str:
    """Get human readable duration string from start time"""
    duration = time.time() - start
    if duration > 1:
        duration_str = f'{duration:,.3f}s'
    elif duration > 1e-3:
        duration_str = f'{round(duration * 1e3)}ms'
    elif duration > 1e-6:
        duration_str = f'{round(duration * 1e6)}us'
    else:
        duration_str = f'{duration * 1e9}ns'
    return duration_str

Here's a gist with the same.
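
For the async branch, usage looks roughly like this (a minimal sketch assuming the timed decorator above is in scope; my_coro and my_func are hypothetical examples):

import asyncio
import time

@timed
async def my_coro():
    await asyncio.sleep(0.2)   # simulate some async work

@timed
def my_func():
    time.sleep(0.05)           # simulate some sync work

asyncio.run(my_coro())  # prints "Running my_coro..." then something like "Finished my_coro in 200ms"
my_func()               # prints "Running my_func..." then something like "Finished my_func in 50ms"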

crizCraig