
I noticed a meaningful difference between iterating through a numpy array directly versus iterating via the tolist method. See the timings below:

directly
[i for i in np.arange(10000000)]
via tolist
[i for i in np.arange(10000000).tolist()]

(timing screenshot: iterating via tolist is noticeably faster than direct iteration)


Having discovered one way to make this faster, I wanted to ask: what else might make it go faster?

What is the fastest way to iterate through a numpy array?

piRSquared
    That *is* odd. I tried it myself several times and it seems that converting it to list does make it faster all the time. Thanks for bringing this to the light. – Ébe Isaac Nov 14 '16 at 16:31
    Just iterate and get the list or do some processing too? Using just `list(np.arange(1000000))` looks quite fast. – Divakar Nov 14 '16 at 16:32
  • @Divakar see http://stackoverflow.com/a/40575522/2336654 – piRSquared Nov 14 '16 at 16:38
  • But then `np.arange(1000000).tolist()` gives the same thing as `list(np.arange(1000000))`, so maybe my earlier comment isn't quite the expected thing I guess. – Divakar Nov 14 '16 at 16:43
  • I rolled back my edit because that is very fast to get at the list. But I still have to iterate through it and do processing. – piRSquared Nov 14 '16 at 16:48
    My question is why would you want to iterate over a numpy array instead of using vectorized functions. – Ignacio Vergara Kausel Nov 14 '16 at 16:54
    `list()` produces a list of `np.int32` objects; `tolist` produces a list of `int`. They are not the same. – hpaulj Nov 14 '16 at 16:58
  • @IgnacioVergaraKausel because I can't figure out a fast vectorized O(n) method. I'll post a question about it later today. – piRSquared Nov 14 '16 at 17:17
  • What's the goal this iteration? Just generating a list of integers? `tolist` is the fastest way. Applying some scalar function to each element of the array? – hpaulj Nov 14 '16 at 17:27
  • @hpaulj I'm trying to calculate the cumulative count by unique value. See my answer to another question here http://stackoverflow.com/a/40575522/2336654. In this problem, I iterate through the array and track how many times I've seen an item, returning the count with every iteration. I've been trying to vectorize this. In fact, one of my answers in that link is an O(n^2) vectorized solution. I've explored the different return options of `np.unique` and haven't come up with a satisfactory answer. All this is information I'd include in another question. This was a by product of that one. – piRSquared Nov 14 '16 at 17:34
  • If you are doing something complicated at each step, the outer iteration mechanism doesn't make much difference in the time. Regardless of how you iterate you are repeating that costly step N-thousand times. – hpaulj Nov 14 '16 at 17:45
  • @hpaulj it's not at all complicated. – piRSquared Nov 14 '16 at 17:47
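For reference, the cumulative-count-by-unique-value problem mentioned in the comments above can be vectorized with a stable argsort. This is a sketch of that approach, not code from the thread:

```python
import numpy as np

def cumcount(a):
    """Cumulative occurrence count of each value, vectorized via a stable sort."""
    order = np.argsort(a, kind="stable")
    s = a[order]
    # start index of each run of equal values in the sorted array
    starts = np.flatnonzero(np.r_[True, s[1:] != s[:-1]])
    run_lengths = np.diff(np.r_[starts, len(a)])
    # position within each run = cumulative count in sorted order
    counts = np.arange(len(a)) - np.repeat(starts, run_lengths)
    # scatter back to the original order
    out = np.empty(len(a), dtype=np.intp)
    out[order] = counts
    return out

print(cumcount(np.array([1, 2, 1, 1, 2, 3])))  # [0 0 1 2 1 0]
```

This is O(n log n) for the sort rather than O(n), but all the work happens in compiled code.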

4 Answers


This is actually not surprising. Let's examine the methods one at a time, starting with the slowest.

[i for i in np.arange(10000000)]

This method asks Python to reach into the numpy array (stored in C memory), one element at a time, allocate a Python object, and store a pointer to that object in the list. Every trip across the boundary between the C-backed numpy array and pure Python carries an overhead cost, and this method pays that cost 10,000,000 times.
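A minimal sketch of that boxing: each element pulled out of the array is a freshly constructed NumPy scalar object, not a plain Python int:

```python
import numpy as np

a = np.arange(3)
first = next(iter(a))       # one boxed element, as the comprehension would see it
print(type(first))          # a NumPy integer scalar (int32/int64, platform dependent)
print(isinstance(first, np.integer))  # True
```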

Next:

[i for i in np.arange(10000000).tolist()]

In this case, using .tolist() makes a single call into the numpy C backend and allocates all of the elements to a list in one shot. You then use Python to iterate over that list.
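A minimal sketch of the single-call conversion, using a small array for illustration:

```python
import numpy as np

a = np.arange(5)
lst = a.tolist()        # one C-level call; every element becomes a plain Python int
print(type(lst))        # <class 'list'>
print(type(lst[0]))     # <class 'int'>
```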

Finally:

list(np.arange(10000000))

This basically does the same thing as above, but it creates a list of numpy scalar objects (e.g. np.int64). Using list(np.arange(10000000)) and np.arange(10000000).tolist() should take about the same time.


So, in terms of iteration, the primary advantage of using numpy is that you don't need to iterate. Operations are applied in a vectorized fashion over the array, and iteration just slows things down. If you find yourself iterating over array elements, look for a way to restructure the algorithm so that it uses only numpy operations (there are so many built-ins!), or, if really necessary, use np.apply_along_axis, np.apply_over_axes, or np.vectorize.
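As a rough illustration (array size and timings are arbitrary, and numbers vary by machine), compare a Python-level loop with the equivalent vectorized operation:

```python
import numpy as np
from timeit import timeit

a = np.arange(1_000_000)

# Python-level loop: touches each element individually
looped = timeit(lambda: [i * 2 for i in a.tolist()], number=5)
# vectorized: one compiled C loop over the whole buffer
vectorized = timeit(lambda: a * 2, number=5)

print(f"loop: {looped:.3f}s  vectorized: {vectorized:.3f}s")
# the vectorized form is typically one to two orders of magnitude faster
```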

James
    But there is a subtle difference between `list(np.arange(10))` and `np.arange(10).tolist()`: the first will result in a list of `np.int64` the second in a list of python `int`s. The first can be problematic for doing stuff like serialisation, e.g. using json. json will error on the first because it cannot handle `np.int64` – MaxNoe Nov 14 '16 at 16:57
  • This is very useful and is why I've upvoted it and I hope others do to. I'm leaving the question open for now as I'm still left wanting to see other options of iteration through the array. – piRSquared Nov 14 '16 at 17:36

These are my timings on a slower machine

In [1034]: timeit [i for i in np.arange(10000000)]
1 loop, best of 3: 2.16 s per loop

If I generate the range directly (Py3, so this is a generator), times are much better. Take this as a baseline for a list comprehension of this size.

In [1035]: timeit [i for i in range(10000000)]
1 loop, best of 3: 1.26 s per loop

tolist converts the arange to a list first; that takes a bit longer, but the iteration is then over a plain list.

In [1036]: timeit [i for i in np.arange(10000000).tolist()]
1 loop, best of 3: 1.6 s per loop

Using list() takes the same time as direct iteration on the array; that suggests direct iteration effectively does this first.

In [1037]: timeit [i for i in list(np.arange(10000000))]
1 loop, best of 3: 2.18 s per loop

In [1038]: timeit np.arange(10000000).tolist()
1 loop, best of 3: 927 ms per loop

same time as iterating over the .tolist result

In [1039]: timeit list(np.arange(10000000))
1 loop, best of 3: 1.55 s per loop

In general if you must loop, working on a list is faster. Access to elements of a list is simpler.

Look at the elements returned by indexing.

a[0] is another numpy object; it is constructed from the values in a, not simply fetched from memory.

list(a)[0] is the same type; the list is just [a[0], a[1], a[2]].

In [1043]: a = np.arange(3)
In [1044]: type(a[0])
Out[1044]: numpy.int32
In [1045]: ll=list(a)
In [1046]: type(ll[0])
Out[1046]: numpy.int32

but tolist converts the array into a pure list, in this case a list of ints. It does more work than list(), but does it in compiled code.

In [1047]: ll=a.tolist()
In [1048]: type(ll[0])
Out[1048]: int

In general don't use list(anarray). It rarely does anything useful, and is not as powerful as tolist().

What's the fastest way to iterate through an array? None; at least not in Python. In C code there are fast ways.

a.tolist() is the fastest, vectorized way of creating a list of integers from an array. It iterates, but does so in compiled code.

But what is your real goal?

hpaulj
  • Thanks @hpaulj this comes very close to actually answering my question in that you stated... "What's the fastest way to iterate through array - None." I'll likely be selecting this as my answer, but I'm leaving it open for a bit. – piRSquared Nov 14 '16 at 17:38

The speedup via tolist only holds for 1D arrays. Once you add a second axis, the performance gain disappears:

1D

import numpy as np
import timeit

num_repeats = 10
x = np.arange(10000000)
                     
via_tolist = timeit.timeit("[i for i in x.tolist()]", number=num_repeats, globals={"x": x})
direct = timeit.timeit("[i for i in x]",number=num_repeats, globals={"x": x})

print(f"tolist: {via_tolist / num_repeats}")
print(f"direct: {direct / num_repeats}")
tolist: 0.430838281600154
direct: 0.49088368080047073

2D

import numpy as np
import timeit

num_repeats = 10
x = np.arange(10000000*10).reshape(-1, 10)
                     
via_tolist = timeit.timeit("[i for i in x.tolist()]", number=num_repeats, globals={"x": x})
direct = timeit.timeit("[i for i in x]", number=num_repeats, globals={"x": x})

print(f"tolist: {via_tolist / num_repeats}")
print(f"direct: {direct / num_repeats}")
tolist: 2.5606724178003786
direct: 1.2158976945000177
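One likely explanation (a sketch, not stated in the answer above): with 2D arrays, direct iteration hands back one cheap row view per step with no per-element conversion, while tolist must convert every scalar into nested Python lists:

```python
import numpy as np

x = np.arange(20).reshape(4, 5)

# direct iteration yields one ndarray view per row: no per-element boxing
rows = [r for r in x]
print(type(rows[0]))        # <class 'numpy.ndarray'>

# tolist converts all 20 elements into nested Python lists of ints
nested = x.tolist()
print(type(nested[0]), type(nested[0][0]))  # <class 'list'> <class 'int'>
```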
FirefoxMetzger

My test case uses a numpy array

[[  34  107]
 [ 963  144]
 [ 921 1187]
 [   0 1149]]

I'm going through this only once, using range and enumerate

USING range

from timeit import default_timer

loopTimer1 = default_timer()
for l1 in range(0, 4):
    print(box[l1])
print("Time taken by range: ", default_timer() - loopTimer1)

Result

[ 34 107]
[963 144]
[ 921 1187]
[   0 1149]
Time taken by range:  0.0005405639985838206

USING enumerate

loopTimer2 = default_timer()
for l2,v2 in enumerate(box):
    print(box[l2])
print("Time taken by enumerate: ", default_timer() - loopTimer2)

Result

[ 34 107]
[963 144]
[ 921 1187]
[   0 1149]
Time taken by enumerate:  0.00025605700102460105

In this test case, enumerate works faster.
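A sketch of a fairer comparison that times only the iteration, not the print calls (variable names are illustrative):

```python
import numpy as np
from timeit import timeit

box = np.array([[34, 107], [963, 144], [921, 1187], [0, 1149]])

# time only element access, not printing
t_range = timeit(lambda: [box[i] for i in range(len(box))], number=100_000)
t_enum = timeit(lambda: [row for _, row in enumerate(box)], number=100_000)

print(f"range: {t_range:.4f}s  enumerate: {t_enum:.4f}s")
# with print() removed, the two loops are much closer in cost
```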

Santhosh
  • `v2` is unused, why would you need it? I think the difference you get is a flux, plus most of the time is spent on printing, not accessing the data. – minhle_r7 Oct 11 '21 at 15:04