92

This is rather the inverse of "What can you use Python generator functions for?": Python generators, generator expressions, and the itertools module are some of my favorite features of Python these days. They're especially useful when setting up chains of operations to perform on a big pile of data; I often use them when processing DSV (delimiter-separated value) files.

So when is it not a good time to use a generator, or a generator expression, or an itertools function?

  • When should I prefer zip() over itertools.izip(), or
  • range() over xrange(), or
  • [x for x in foo] over (x for x in foo)?

Obviously, we eventually need to "resolve" a generator into actual data, usually by creating a list or iterating over it with a non-generator loop. Sometimes we just need to know the length. This isn't what I'm asking.

We use generators so that we aren't allocating new lists in memory for interim data. This especially makes sense for large datasets. Does it make sense for small datasets too? Is there a noticeable memory/CPU trade-off?

I'm especially interested if anyone has done some profiling on this, in light of the eye-opening discussion of list comprehension performance vs. map() and filter(). (alt link)
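In the meantime, here's the sort of micro-benchmark I have in mind, as a minimal sketch (the sum-of-squares workload and the dataset size are placeholders I picked, nothing canonical):

import timeit

# Compare building-and-consuming a list comprehension vs. a generator
# expression on a deliberately small dataset.
setup = "data = range(10)"

list_time = timeit.timeit("sum([x * x for x in data])", setup=setup)
gen_time = timeit.timeit("sum(x * x for x in data)", setup=setup)

print("list comprehension:   %f" % list_time)
print("generator expression: %f" % gen_time)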

David Eyk
  • I posed a [similar question here](http://stackoverflow.com/q/38064206/4013571) and did some analysis to find that *in my particular example* **lists are faster for iterables of length `<5`**. – Alexander McFarlane Jun 28 '16 at 02:28
  • Does this answer your question? [Generator Expressions vs. List Comprehension](https://stackoverflow.com/questions/47789/generator-expressions-vs-list-comprehension) – ggorlen Jul 12 '20 at 20:05
  • In 3.x, `zip` behaves lazily, and `itertools.izip` has been removed. Similarly with `range` and `xrange`. – Karl Knechtel Jul 04 '22 at 01:05

10 Answers

68

Use a list instead of a generator when:

1) You need to access the data multiple times (i.e. cache the results instead of recomputing them):

for i in outer:           # used once, okay to be a generator or return a list
    for j in inner:       # used multiple times, reusing a list is better
         ...

2) You need random access (or any access other than forward sequential order; see the sketch after this list):

for i in reversed(data): ...     # generators aren't reversible

s[i], s[j] = s[j], s[i]          # generators aren't indexable

3) You need to join strings (which requires two passes over the data):

s = ''.join(data)                # lists are faster than generators in this use case

4) You are using PyPy which sometimes can't optimize generator code as much as it can with normal function calls and list manipulations.
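A short illustrative sketch of cases 2 and 3 (my own example, not code from the answer):

data = list("abcde")
gen = (c for c in "abcde")

# Case 2: lists support indexing and reversal; generators do not.
print(data[2])               # 'c'
print(list(reversed(data)))  # ['e', 'd', 'c', 'b', 'a']
try:
    reversed(gen)
except TypeError as e:
    print("generators aren't reversible: %s" % e)

# Case 3: str.join can size its result buffer in one pass over a list;
# handed a generator, it must first materialize a temporary list.
print(''.join(data))         # 'abcde'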

Raymond Hettinger
  • For #3, couldn't the two passes be avoided by using `ireduce` to replicate the join? – Platinum Azure Oct 29 '14 at 16:48
  • Thanks! I wasn't aware of the string joining behavior. Can you provide or link to an explanation of why it requires two passes? – David Eyk Oct 29 '14 at 18:47
  • @DavidEyk *str.join* makes one pass to add up the lengths of all the string fragments so it knows how much memory to allocate for the combined final result. The second pass copies the string fragments into the new buffer to create a single new string. See https://hg.python.org/cpython/file/82fd95c2851b/Objects/stringlib/join.h#l54 – Raymond Hettinger Oct 30 '14 at 05:56
  • Interesting, I very often use generators to join strings. But I wonder: how does it work if it needs two passes? For instance, `''.join('%s' % i for i in xrange(10))` – bgusach Nov 03 '14 at 09:26
  • @ikaros45 If the input to *join* isn't a list, it has to do extra work to build a temporary list for the two passes. Roughly this: ``data = data if isinstance(data, list) else list(data); n = sum(map(len, data)); buffer = bytearray(n); ...`` – Raymond Hettinger Nov 03 '14 at 15:03
44

In general, don't use a generator when you need list operations, like len(), reversed(), and so on.

There may also be times when you don't want lazy evaluation (e.g. to do all the calculation up front so you can release a resource). In that case, a list comprehension might be better.
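For example, here's a small sketch (my own illustrative code) of how eager evaluation changes where an error surfaces:

def risky(x):
    # Hypothetical computation that fails on one input.
    if x == 3:
        raise ValueError("bad value: %d" % x)
    return x * 2

# Eager: the exception is raised right here, at creation time.
try:
    results = [risky(x) for x in range(5)]
except ValueError as e:
    print("caught during list creation: %s" % e)

# Lazy: creating the generator succeeds; the error only appears
# partway through whichever loop later consumes it.
gen = (risky(x) for x in range(5))
try:
    for value in gen:
        print(value)  # prints 0, 2, 4 before the failure
except ValueError as e:
    print("caught mid-iteration: %s" % e)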

Ryan Ginstrom
  • Also, doing all the calculation up front ensures that if the calculation of the list elements throws an exception, it will be thrown at the point where the list is *created*, not in the loop that subsequently iterates through it. If you need to ensure error-free processing of the entire list before continuing, generators are no good. – Ryan C. Thompson Mar 06 '11 at 20:06
  • That's a good point. It's very frustrating to get halfway through processing a generator, only to have everything explode. It can potentially be dangerous. – David Eyk Nov 03 '11 at 17:03
29

Profile, Profile, Profile.

Profiling your code is the only way to know if what you're doing has any effect at all.

Most uses of xrange, generators, etc. are over small datasets of fixed size. It's only when you get to large datasets that it really makes a difference. range() vs. xrange() is mostly a matter of making the code look a tiny bit uglier, while losing nothing and maybe gaining something.

Profile, Profile, Profile.
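Something as small as this can settle the question for your own workload (an illustrative sketch; process_data is a made-up stand-in for whatever you're actually doing):

import cProfile

def process_data(items):
    # Made-up stand-in for a real pipeline.
    return sum(x * x for x in items)

# Profile the whole run and see where the time actually goes before
# deciding that a generator (or a list) is your bottleneck.
cProfile.run("process_data(range(1000000))")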

Jerub
  • Profile, indeed. One of these days, I'll try and do an empirical comparison. Until then, I was just hoping someone else already had. :) – David Eyk Oct 29 '08 at 16:01
  • Profile, Profile, Profile. I completely agree. Profile, Profile, Profile. – Jeppe Apr 07 '20 at 20:59
17

You should never favor zip() over izip(), range() over xrange(), or list comprehensions over generator expressions. In Python 3.0, range() has xrange-like semantics and zip() has izip-like semantics.

List comprehensions are actually clearer when written as list(frob(x) for x in foo) for those times you need an actual list.
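A quick sketch of those 3.0 semantics (illustrative; run under Python 3):

# In Python 3, zip() and range() are lazy, like the old izip()/xrange().
pairs = zip([1, 2, 3], "abc")
print(pairs)        # a lazy zip object, not a list
print(list(pairs))  # [(1, 'a'), (2, 'b'), (3, 'c')] -- materialized on demand

r = range(5)
print(r)            # range(0, 5) -- no list is built up front
print(list(r))      # [0, 1, 2, 3, 4]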

Steven Huwig
  • @Steven I don't disagree, but I am wondering what the reasoning behind your answer is. Why should zip, range, and list comprehensions never be favoured over the corresponding "lazy" version? – mhawke Oct 29 '08 at 06:05
  • Because, as he said, the old behaviour of zip and range will go away soon. – Oct 29 '08 at 10:01
  • @Steven: Good point. I'd forgotten about these changes in 3.0, which probably means that someone up there is convinced of their general superiority. Re: List comprehensions, they are often clearer (and faster than expanded `for` loops!), but one can easily write incomprehensible list comprehensions. – David Eyk Oct 29 '08 at 15:54
  • I meant that list(frob(x) for x in foo) is more descriptive than [frob(x) for x in foo] -- i.e. the [] list comprehension "sugar" is not helpful. – Steven Huwig Oct 29 '08 at 17:45
  • I see what you mean, but I find the `[]` form descriptive enough (and more concise, and less cluttered, generally). But this is just a matter of taste. – David Eyk Oct 30 '08 at 15:57
  • And it looks like this will be the official answer, mainly for the point about generators becoming the normal forms in 3.0. Nobody's brought up any serious detriments to the careful use of generators, even on short datasets, so I will continue to use them with abandon. – David Eyk Oct 30 '08 at 15:59
  • Please check my response with performance numbers below. List comprehensions can be significantly faster than generator expressions when using psyco. – Ryan Ginstrom Nov 01 '08 at 06:34
  • The list operations are faster for small data sizes, but *everything* is fast when the data size is small, so you should always prefer generators unless you have a specific reason to use lists (for such reasons, see Ryan Ginstrom's answer). – Ryan C. Thompson Mar 06 '11 at 20:12
  • This is a rather weak point; I can't think of a case where you can't get the lazy version by playing with `try/except NameError` or `ImportError`. – bgusach Nov 03 '14 at 09:29
  • Using `timeit` in Python 3.8 gives `[frob(x) for x in foo]` as 50% faster than `list(frob(x) for x in foo)`. The latter is not a list comprehension as the post states--it's a [generator expression](https://www.python.org/dev/peps/pep-0289/). – ggorlen Jul 12 '20 at 19:04
7

As you mention, "this especially makes sense for large datasets"; I think that answers your own question.

If you're not hitting any walls performance-wise, you can still stick to lists and standard functions. Then, when you run into performance problems, make the switch.

As mentioned by @u0b34a0f6ae in the comments, however, using generators at the start can make it easier for you to scale to larger datasets.

monkut
  • 42,176
  • 24
  • 124
  • 155
6

Regarding performance: if you're using psyco, lists can be quite a bit faster than generators. In the example below, lists are almost 50% faster when using psyco.full().

import psyco
import time
import cStringIO

def time_func(func):
    """The amount of time it requires func to run"""
    start = time.clock()
    func()
    return time.clock() - start

def fizzbuzz(num):
    """That algorithm we all know and love"""
    if not num % 3 and not num % 5:
        return "%d fizz buzz" % num
    elif not num % 3:
        return "%d fizz" % num
    elif not num % 5:
        return "%d buzz" % num
    return None

def with_list(num):
    """Try getting fizzbuzz with a list comprehension and range"""
    out = cStringIO.StringIO()
    for fibby in [fizzbuzz(x) for x in range(1, num) if fizzbuzz(x)]:
        print >> out, fibby
    return out.getvalue()

def with_genx(num):
    """Try getting fizzbuzz with generator expression and xrange"""
    out = cStringIO.StringIO()
    for fibby in (fizzbuzz(x) for x in xrange(1, num) if fizzbuzz(x)):
        print >> out, fibby
    return out.getvalue()

def main():
    """
    Test speed of generator expressions versus list comprehensions,
    with and without psyco.
    """

    #our variables
    nums = [10000, 100000]
    funcs = [with_list, with_genx]

    #  try without psyco 1st
    print "without psyco"
    for num in nums:
        print "  number:", num
        for func in funcs:
            print func.__name__, time_func(lambda : func(num)), "seconds"
        print

    #  now with psyco
    print "with psyco"
    psyco.full()
    for num in nums:
        print "  number:", num
        for func in funcs:
            print func.__name__, time_func(lambda : func(num)), "seconds"
        print

if __name__ == "__main__":
    main()

Results:

without psyco
  number: 10000
with_list 0.0519102208309 seconds
with_genx 0.0535933367509 seconds

  number: 100000
with_list 0.542204280744 seconds
with_genx 0.557837353115 seconds

with psyco
  number: 10000
with_list 0.0286369007033 seconds
with_genx 0.0513424889137 seconds

  number: 100000
with_list 0.335414877839 seconds
with_genx 0.580363490491 seconds
Ryan Ginstrom
  • That's because psyco doesn't speed up generators at all, so it's more of a shortcoming of psyco than of generators. Good answer, though. – Steven Huwig Nov 03 '08 at 20:10
  • Also, psyco is pretty much unmaintained now. All the developers are spending time on PyPy's JIT which does to the best of my knowledge optimise generators. – Noufal Ibrahim Dec 31 '09 at 16:55
3

You should prefer list comprehensions if you need to keep the values around for something else later and the size of your set is not too large.

For example: you are creating a list that you will loop over several times later in your program.

To some extent, you can think of generators as a replacement for iteration (loops), and list comprehensions as a type of data-structure initialization. If you want to keep the data structure, then use a list comprehension.
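A small sketch of the reuse point (my own example): a generator is exhausted after one pass, while a list can be traversed repeatedly.

squares_gen = (x * x for x in range(4))
print(sum(squares_gen))   # 14
print(sum(squares_gen))   # 0 -- the generator is already exhausted

squares_list = [x * x for x in range(4)]
print(sum(squares_list))  # 14
print(sum(squares_list))  # 14 -- the list can be reused freely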

minty
  • If you only need limited look-ahead / look-behind on the stream, then maybe `itertools.tee()` can help you. But generally, if you want more than one pass, or random access to some intermediate data, make a list/set/dict of it. – Beni Cherniavsky-Paskin Dec 31 '09 at 13:33
2

As far as performance is concerned, I can't think of any times that you would want to use a list over a generator.

Jason Baker
  • `all(True for _ in range(10 ** 8))` is slower than `all([True for _ in range(10 ** 8)])` in Python 3.8. I'd prefer a list over a generator here – ggorlen Jul 12 '20 at 03:14
2

I've never found a situation where generators would hinder what you're trying to do. There are, however, plenty of instances where using generators would not help you any more than not using them.

For example:

sorted(xrange(5))

Does not offer any improvement over:

sorted(range(5))
Jeremy Cantrell
  • Neither of those offers any improvement over `range(5)`, since the resulting list is already sorted. – dan04 Jan 06 '14 at 19:19
0

A generator builds an enumerable sequence of values. Enumerables are useful when an iterative process can consume the values on demand. Running a generator over millions of records takes time, so if the dataset is that large, it may be more practical to let SQL Server process the data in SQL instead.

Golden Lion