Reproducing your code, but with a much smaller range so we can actually look at the lists and arrays:
In [1]: import numpy as np
In [2]: alist = [[num + 1 for num in range(5)] for lst in range(10)]
In [3]: alist
Out[3]:
[[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5]]
While we could make an array directly from that with np.array(alist), we can also make one by broadcasting a 5-element array against a "vertical" 10-element one:
In [4]: arr = np.arange(1,6)+np.zeros((10,1),int)
In [5]: arr
Out[5]:
array([[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5]])
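As a quick sanity check (a sketch, not from the original session), the broadcast result is the same array we'd get from the nested list, and np.tile is yet another way to build it:
np.array_equal(np.array(alist), arr)   # True - same (10, 5) array of values
np.tile(np.arange(1, 6), (10, 1))      # another way to repeat the row 10 times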
Your count loop:
In [6]: count = 0
   ...: for lst in alist:
   ...:     for num in lst:
   ...:         count += 1
   ...:
In [7]: count
Out[7]: 50
And its time - here I use timeit, which repeats the run and reports an average time. In an ipython
session it's very easy to use:
In [8]: %%timeit
   ...: count = 0
   ...: for lst in alist:
   ...:     for num in lst:
   ...:         count += 1
   ...:
2.33 µs ± 0.833 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
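(As an aside - a sketch, not one of the original timings - counting the elements of a list of lists doesn't even need the inner loop:
count = sum(len(lst) for lst in alist)   # 50, one len() call per sublist
but the nested loop above is what we're comparing against.)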
Similar iteration on the 2d array - significantly slower:
In [9]: %%timeit
   ...: count = 0
   ...: for lst in arr:
   ...:     for num in lst:
   ...:         count += 1
   ...:
18.1 µs ± 144 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
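Much of that slowdown is element access: each num pulled out of arr is a new numpy scalar object, while the list loop just hands back existing Python ints. A quick way to see the difference (a sketch using the objects above):
type(alist[0][0])   # int - a plain Python object, cheap to hand out
type(arr[0, 0])     # an array scalar (e.g. numpy.int64), created on each access
arr.tolist()        # if you really must loop, converting back to lists first is usually faster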
nditer
is still slower than the list loop. Usually nditer
is slower than regular iteration; here it's relatively fast only because it isn't doing anything with the num
variable, so this isn't a good test of its performance.
In [10]: %%timeit
    ...: count = 0
    ...: for num in np.nditer(arr):
    ...:     count += 1
    ...:
7 µs ± 16.9 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
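To make nditer actually touch the values - a fairer, though still slow, test - something like this sketch would do:
total = 0
for num in np.nditer(arr):
    total += num   # num is a 0-d array; using it forces a real value access
# total ends up 150 here (10 rows of 1+2+3+4+5)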
But if we use the array as intended, we get something much better (and more so with a bigger arr).
In [11]: np.count_nonzero(arr)
Out[11]: 50
In [12]: timeit np.count_nonzero(arr)
960 ns ± 2.24 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
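(Since all we're really doing here is counting elements, the shape metadata already has the answer, with no pass over the data at all:
arr.size    # 50 - rows * columns, read straight from the shape
arr.shape   # (10, 5)
count_nonzero is the better comparison, though, since it actually looks at the values.)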
Another way - it's not so good with this small array, but I expect it will scale better than the list loop:
In [17]: timeit (arr>0).sum()
10.2 µs ± 32.5 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
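To check that scaling claim, the same comparisons can be rerun on a bigger array (a sketch; actual numbers will depend on the machine):
big = np.arange(1, 6) + np.zeros((100_000, 1), int)   # 100,000 x 5 version of arr
biglist = big.tolist()
# in ipython, compare e.g.:
#   %timeit sum(1 for lst in biglist for num in lst)   # the explicit element loop
#   %timeit np.count_nonzero(big)
#   %timeit (big > 0).sum()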
In sum - numpy
can be faster, if used right. But don't try to imitate python list methods with it.