
General question: is there a preferable style for building up a list, in terms of efficiency, assuming that you have to do it within a loop? For example, is one of these options preferable for building a list of integers:

mylist = []

for x, y in mystuff:
  # x, y are strings that need to be
  # added sequentially to list
  mylist.extend([int(x), int(y)])

versus

mylist = []

for x, y in mystuff:
  mylist.append(int(x))
  mylist.append(int(y))

Or any others? Open to uses of scipy/numpy for this if relevant. thanks.

3 Answers


If you need to micro-optimize like this, the only way to know what's fastest is to test.

The short version is: append is faster than extend, and Joran Beasley's suggestion of itertools.chain.from_iterable is slightly faster than either, but only if you replace the map with a list comprehension.

So:

import itertools
import operator
import timeit
from functools import reduce  # reduce is a builtin in Python 2 but lives in functools in Python 3

def makestuff(count):
    for i in range(count):
        yield (i, i)

def f_extend(mystuff):
    mylist = []
    for x, y in mystuff:
        mylist.extend([int(x), int(y)])
    return mylist

def f_append(mystuff):
    mylist = []
    for x, y in mystuff:
        mylist.append(int(x))
        mylist.append(int(y))
    return mylist

def f_chainmap(mystuff):
    return list(map(int, itertools.chain(*mystuff)))

def f_chaincomp(mystuff):
    return [int(x) for x in itertools.chain(*mystuff)]

def f_chainfrommap(mystuff):
    return list(map(int, itertools.chain.from_iterable(mystuff)))

def f_chainfromcomp(mystuff):
    return [int(x) for x in itertools.chain.from_iterable(mystuff)]

def f_reducecompcomp(mystuff):
    return [int(x) for x in reduce(operator.iadd, (list(y) for y in mystuff), [])]

def f_reducecompmap(mystuff):
    return [int(x) for x in reduce(operator.iadd, map(list, mystuff), [])]


try:
    import numpy
    def f_numpy(mystuff):
        return numpy.array(mystuff).flatten().tolist()
    def f_numpy2(mystuff):
        return numpy.array(list(mystuff)).flatten().tolist()
except ImportError:
    pass  # skip the numpy tests if numpy isn't installed

if __name__ == '__main__':
    import sys
    main = sys.modules['__main__']
    count = int(sys.argv[1]) if len(sys.argv) > 1 else 10000
    for f in dir(main):
        if f.startswith('f_'):
            func = getattr(main, f)
            mystuff = makestuff(count)
            testfunc = lambda: func(mystuff)
            print('{}: {}'.format(f, timeit.timeit(testfunc, number=count)))

For Python 2, I tried the map versions without the extra list, and they were slightly faster, but still not nearly competitive. For Python 3, of course, the list is necessary.
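For reference, the Python 2-only variant I mean looks something like this (just a sketch; in Python 2, map returns a list directly, so the outer list() is redundant):

def f_chainfrommap_py2(mystuff):
    # Python 2 only: map() already returns a list here
    return map(int, itertools.chain.from_iterable(mystuff))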

Here are my timings:

$ python testlister.py 1000000
f_append: 1.34638285637
f_chaincomp: 2.12710499763
f_chainfromcomp: 1.20806899071
f_chainfrommap: 2.77231812477
f_chainmap: 3.67478609085
f_extend: 1.38338398933
f_numpy: 5.52979397774
f_numpy2: 7.5826470852
f_reducecompcomp: 2.17834687233
f_reducecompmap: 3.16517782211

$ python3 ./testlister.py 1000000
f_append: 0.9949617639649659
f_chaincomp: 2.0521950440015644
f_chainfromcomp: 0.9724521590862423
f_chainfrommap: 2.5558998831082135
f_chainmap: 3.5766013460233808
f_extend: 1.149905970087275
f_reducecompcomp: 2.2112889911513776
f_reducecompmap: 1.9317334480583668

My python is Apple's stock Python 2.7.2, while python3 is the python.org 3.3.0, both 64-bit, both on OS X 10.8.2, on a mid-2012 MacBook Pro with a 2.2GHz i7 and 4GB.

If you're using 32-bit Python on a POSIX platform, note that somewhere in the not-too-distant past, iterators got an optimization that seems to have sped up many things in itertools in 64-bit builds, but slowed them down in 32-bit ones. So, you may find that append wins in that case. (As always, test on the platform(s) you actually care about optimizing for.)

Ashwini Chaudhary linked to Flattening a shallow list in Python, which further linked to finding elements in python association lists efficiently. I suspect part of the difference between my results and theirs was improvements in iterators between 2.6.0 and 2.7.2/3.3.0, but the fact that we're explicitly using 2-element elements instead of larger ones is probably even more important.

Also, at least one of the answers claimed that reduce was the fastest. The reduce implementations in the original post are all terribly slow, but I was able to come up with faster versions. They still aren't competitive with append or chain.from_iterable, but they're in the right ballpark.

The f_numpy function is heltonbiker's implementation. Since mystuff is a 2D iterator, this actually just generates a 0D array wrapping the iterator, so all numpy can do is add overhead. I was able to come up with an implementation that generates a 1D array of iterators, but that was even slower, because now all numpy can do is add overhead N times as often. The only way I could get a 2D array of integers was by calling list first, as in f_numpy2, which made things even slower. (To be fair, throwing an extra list into the other functions slowed them down too, but not nearly as bad as with numpy.)
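If you want to see what's going on, here's a quick illustration (a sketch; exact output may vary by numpy version, but the shapes tell the story):

import numpy

gen = ((i, i) for i in range(5))          # a generator of pairs, like makestuff()
a = numpy.array(gen)
print(a.shape, a.dtype)                   # () object -- a 0D array wrapping the generator itself

b = numpy.array(list((i, i) for i in range(5)))   # listify first, as in f_numpy2
print(b.shape, b.dtype)                   # (5, 2) with an integer dtype -- now it's a real 2D array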

However, it's quite possible that I'm just blanking here, and there is a reasonable way to use numpy. Of course, if you can be sure that either the top-level mystuff or each element in mystuff is a list or a tuple, you can write something better, and if you can redesign your app so you have a 2D numpy.array in the first place, instead of a general sequence of sequences, that'll be a whole different story (see the sketch below). But if you just have a general iterable of sequences, numpy doesn't seem well suited to this use case.
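For illustration, that redesigned case might look something like this (a sketch, assuming the data already lives in a 2D numpy.array of ints):

import numpy

arr = numpy.array([[1, 2], [3, 4]])   # data is already a 2D array
flat = arr.flatten().tolist()         # flatten without a per-element Python loop
print(flat)                           # [1, 2, 3, 4]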

abarnert
  • @JoranBeasley: I was curious about the `chain` vs. `chain.from_iterable` difference. I definitely didn't expect it to be that dramatic! I'm also surprised that `list(map())` vs. comprehensions are so bad in 3.x. – abarnert Dec 22 '12 at 01:16
  • I added an answer including Numpy, perhaps if you're interested that one can be tested too. – heltonbiker Dec 22 '12 at 01:32
>>> import itertools
>>> my_list = [[1,2],[3,4]]
>>> flat_list_generator = itertools.chain.from_iterable(my_list)  # flatten (note: it's a generator!)
>>> map(int, flat_list_generator)  # map to int type (since OP made them ints explicitly)
[1, 2, 3, 4]

I think this is what you want.
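(As discussed in the comments, on Python 3 map returns an iterator rather than a list, so if you want an actual list back you'd wrap it in list() or use a comprehension. A minimal sketch:)

>>> list(map(int, itertools.chain.from_iterable(my_list)))    # works on Python 3 as well
[1, 2, 3, 4]
>>> [int(x) for x in itertools.chain.from_iterable(my_list)]  # comprehension variant
[1, 2, 3, 4]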

Joran Beasley
  • You probably want `chain.from_iterable(my_list)` instead of `chain(*my_list)`, because if either is faster, it's going to be the former. But otherwise, this is probably the best answer. – abarnert Dec 22 '12 at 00:55
  • Thanks, fixed... this may actually still be slower, as I think it's O(n^2) where his original was O(n)... at least I think – Joran Beasley Dec 22 '12 at 00:57
  • @abarnert in terms of performance they are almost the same. http://stackoverflow.com/questions/406121/flattening-a-shallow-list-in-python#408281. – Ashwini Chaudhary Dec 22 '12 at 01:04
  • @AshwiniChaudhary: Look at my test below. Unless there's something wrong with my code, they're not even close, at least in this case. Which surprised the hell out of me. Maybe that's one of the things that was optimized in 2.7.? and 3.?… (This is of course exactly why you test on your actual platform…) – abarnert Dec 22 '12 at 01:17
  • As it turns out, at least on my tests, this is not actually the best answer, because using `map` to apply the `int` more than doubles the time! Doing the same thing with a list comprehension is better (and also makes it work in 3.x.) – abarnert Dec 22 '12 at 01:19
  • yeah I think I mentioned that in my second comment :P – Joran Beasley Dec 22 '12 at 01:23
  • @JoranBeasley: But the `map` doesn't make it `O(n^2)`, just `O(2n)` = `O(n)`. And the comprehension does the same thing. It just apparently does it faster. – abarnert Dec 22 '12 at 01:26

If mystuff is a list of pairs, then with Numpy you can do:

result = numpy.array(mystuff, dtype=float).flatten()

or optionally make it a list:

result = numpy.array(mystuff, dtype=float).flatten().tolist()

From my qualitative experience, these array creation routines are quite fast.
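For example (a small sketch; mystuff here is just a made-up list of string pairs like the ones in the question):

import numpy

mystuff = [("1", "2"), ("3", "4")]   # hypothetical sample data
result = numpy.array(mystuff, dtype=float).flatten().tolist()
print(result)                        # [1.0, 2.0, 3.0, 4.0]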

heltonbiker
  • Hold on… I think your `numpy` is making a 1D array of objects, each of which is a two-element tuple/list/iterator, rather than a 2D array… or maybe even a 0D array with a single object in it? That would explain why it's slow. Let me see if I can fix it. – abarnert Dec 22 '12 at 01:39
  • OK, as written, it's making a 0D array with a single object. I've been able to get it to make a 1D array of iterators, but that still doesn't help. I haven't been able to get it to make a 2D array except by first calling `list()` on the iterator, at which point any benefit of `numpy` is clearly swamped by all of the listifications going on. (See my results.) – abarnert Dec 22 '12 at 01:46
  • @abarnert I added `dtype=float` for automatic conversion, but indeed, if `mystuff` is an iterator, you'd need to "flesh it out" first, thus making it potentially inefficient... – heltonbiker Dec 22 '12 at 02:04
  • Yeah, I added comments to the effect that `numpy` is a lot better when it's not dealing with iterators. I'm not sure if the problem is that it can't consume iterators as quickly as `list`/`deque`/etc., or that it's only efficient if it knows the size in advance… – abarnert Dec 27 '12 at 18:25