I'm looking for the most efficient way to determine whether a large array
contains at least one nonzero value. At first glance np.any seems like the
obvious tool for the job, but it is unexpectedly slow over large arrays.
Consider this extreme case:
first = np.zeros(10**3, dtype=np.bool)
last = np.zeros(10**3, dtype=np.bool)
first[0] = True
last[-1] = True
# test 1
%timeit np.any(first)
>>> 100000 loops, best of 3: 6.36 us per loop
# test 2
%timeit np.any(last)
>>> 100000 loops, best of 3: 6.95 us per loop
At least np.any seems to be doing something vaguely sensible here - if the
nonzero value is the first one in the array, there should be no need to
consider any others before returning True, so I would expect test 1 to be
slightly quicker than test 2.
However, what happens when we make the arrays much larger?
first = np.zeros(10**9, dtype=np.bool)
last = np.zeros(10**9, dtype=np.bool)
first[0] = True
last[-1] = True
# test 3
%timeit np.any(first)
>>> 10 loops, best of 3: 21.6 ms per loop
# test 4
%timeit np.any(last)
>>> 1 loops, best of 3: 739 ms per loop
As expected, test 4 is quite a lot slower than test 3. However, in test 3
np.any should still only have to check the value of a single element in
first in order to know that it contains at least one nonzero value. Why,
then, is test 3 so much slower than test 1?
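In the meantime, one workaround that recovers the expected short-circuiting on large arrays is to scan in fixed-size chunks and bail out as soon as any chunk contains a nonzero value. This is only a sketch; the helper name and chunk size below are my own choices, not part of NumPy:

```python
import numpy as np

def chunked_any(arr, chunk_size=2**20):
    # Scan the flattened array one chunk at a time so we can stop
    # as soon as a nonzero value turns up, instead of reducing the
    # whole array. (chunked_any and chunk_size are my own names.)
    flat = arr.ravel()
    for start in range(0, flat.size, chunk_size):
        if flat[start:start + chunk_size].any():
            return True
    return False

first = np.zeros(10**7, dtype=bool)
first[0] = True
chunked_any(first)   # True after inspecting only the first chunk
```

When the nonzero value is near the start, this only ever touches the first chunk; in the worst case it degrades to a full scan, like np.any itself.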
Edit 1:
I'm using a development version of Numpy (1.8.0.dev-e11cd9b), but I get the exact same timing results using Numpy 1.7.1. I'm running 64 bit Linux, Python 2.7.4. My system is basically idling (I'm running an IPython session, a browser and a text editor), and I'm definitely not hitting the swap. I also replicated the result on another machine running Numpy 1.7.1.
Edit 2:
Using Numpy 1.6.2 I get times of ~1.85us for both tests 1 and 3, so as jorgeca says there seems to have been some performance regression between Numpy 1.6.2 and 1.7.0 in this regard.
Edit 3:
Following J.F. Sebastian and jorgeca's lead I've done some more benchmarking
using np.all on an array of zeros, which ought to be equivalent to calling
np.any on an array where the first element is one.
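That equivalence is easy to sanity-check (a trivial demo, not part of the benchmark itself):

```python
import numpy as np

zeros = np.zeros(1000, dtype=bool)
first_set = np.zeros(1000, dtype=bool)
first_set[0] = True

# Both reductions could in principle stop at element 0:
# np.all hits a False immediately, np.any hits a True immediately.
assert bool(np.all(zeros)) is False
assert bool(np.any(first_set)) is True
```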
Test script:
import timeit
import numpy as np
print 'Numpy v%s' % np.version.full_version
stmt = "np.all(x)"
for ii in xrange(10):
    setup = "import numpy as np; x = np.zeros(%d, dtype=np.bool)" % (10**ii)
    timer = timeit.Timer(stmt, setup)
    n, r = 1, 3
    t = np.min(timer.repeat(r, n))
    while t < 0.2:
        n *= 10
        t = np.min(timer.repeat(r, n))
    t /= n
    if t < 1E-3:
        timestr = "%1.3f us" % (t * 1E6)
    elif t < 1:
        timestr = "%1.3f ms" % (t * 1E3)
    else:
        timestr = "%1.3f s" % t
    print "Array size: 1E%i, %i loops, best of %i: %s/loop" % (ii, n, r, timestr)
Results:
Numpy v1.6.2
Array size: 1E0, 1000000 loops, best of 3: 1.738 us/loop
Array size: 1E1, 1000000 loops, best of 3: 1.845 us/loop
Array size: 1E2, 1000000 loops, best of 3: 1.862 us/loop
Array size: 1E3, 1000000 loops, best of 3: 1.858 us/loop
Array size: 1E4, 1000000 loops, best of 3: 1.864 us/loop
Array size: 1E5, 1000000 loops, best of 3: 1.882 us/loop
Array size: 1E6, 1000000 loops, best of 3: 1.866 us/loop
Array size: 1E7, 1000000 loops, best of 3: 1.853 us/loop
Array size: 1E8, 1000000 loops, best of 3: 1.860 us/loop
Array size: 1E9, 1000000 loops, best of 3: 1.854 us/loop
Numpy v1.7.0
Array size: 1E0, 100000 loops, best of 3: 5.881 us/loop
Array size: 1E1, 100000 loops, best of 3: 5.831 us/loop
Array size: 1E2, 100000 loops, best of 3: 5.924 us/loop
Array size: 1E3, 100000 loops, best of 3: 5.864 us/loop
Array size: 1E4, 100000 loops, best of 3: 5.997 us/loop
Array size: 1E5, 100000 loops, best of 3: 6.979 us/loop
Array size: 1E6, 100000 loops, best of 3: 17.196 us/loop
Array size: 1E7, 10000 loops, best of 3: 116.162 us/loop
Array size: 1E8, 1000 loops, best of 3: 1.112 ms/loop
Array size: 1E9, 100 loops, best of 3: 11.061 ms/loop
Numpy v1.7.1
Array size: 1E0, 100000 loops, best of 3: 6.216 us/loop
Array size: 1E1, 100000 loops, best of 3: 6.257 us/loop
Array size: 1E2, 100000 loops, best of 3: 6.318 us/loop
Array size: 1E3, 100000 loops, best of 3: 6.247 us/loop
Array size: 1E4, 100000 loops, best of 3: 6.492 us/loop
Array size: 1E5, 100000 loops, best of 3: 7.406 us/loop
Array size: 1E6, 100000 loops, best of 3: 17.426 us/loop
Array size: 1E7, 10000 loops, best of 3: 115.946 us/loop
Array size: 1E8, 1000 loops, best of 3: 1.102 ms/loop
Array size: 1E9, 100 loops, best of 3: 10.987 ms/loop
Numpy v1.8.0.dev-e11cd9b
Array size: 1E0, 100000 loops, best of 3: 6.357 us/loop
Array size: 1E1, 100000 loops, best of 3: 6.399 us/loop
Array size: 1E2, 100000 loops, best of 3: 6.425 us/loop
Array size: 1E3, 100000 loops, best of 3: 6.397 us/loop
Array size: 1E4, 100000 loops, best of 3: 6.596 us/loop
Array size: 1E5, 100000 loops, best of 3: 7.569 us/loop
Array size: 1E6, 100000 loops, best of 3: 17.445 us/loop
Array size: 1E7, 10000 loops, best of 3: 115.109 us/loop
Array size: 1E8, 1000 loops, best of 3: 1.094 ms/loop
Array size: 1E9, 100 loops, best of 3: 10.840 ms/loop
Edit 4:
Following seberg's comment I tried the same test with an np.float32 array
instead of np.bool. In this case, Numpy 1.6.2 also shows a slowdown as array
sizes increase:
Numpy v1.6.2
Array size: 1E0, 100000 loops, best of 3: 3.503 us/loop
Array size: 1E1, 100000 loops, best of 3: 3.597 us/loop
Array size: 1E2, 100000 loops, best of 3: 3.742 us/loop
Array size: 1E3, 100000 loops, best of 3: 4.745 us/loop
Array size: 1E4, 100000 loops, best of 3: 14.533 us/loop
Array size: 1E5, 10000 loops, best of 3: 112.463 us/loop
Array size: 1E6, 1000 loops, best of 3: 1.101 ms/loop
Array size: 1E7, 100 loops, best of 3: 11.724 ms/loop
Array size: 1E8, 10 loops, best of 3: 116.924 ms/loop
Array size: 1E9, 1 loops, best of 3: 1.168 s/loop
Numpy v1.7.1
Array size: 1E0, 100000 loops, best of 3: 6.548 us/loop
Array size: 1E1, 100000 loops, best of 3: 6.546 us/loop
Array size: 1E2, 100000 loops, best of 3: 6.804 us/loop
Array size: 1E3, 100000 loops, best of 3: 7.784 us/loop
Array size: 1E4, 100000 loops, best of 3: 17.946 us/loop
Array size: 1E5, 10000 loops, best of 3: 117.235 us/loop
Array size: 1E6, 1000 loops, best of 3: 1.096 ms/loop
Array size: 1E7, 100 loops, best of 3: 12.328 ms/loop
Array size: 1E8, 10 loops, best of 3: 118.431 ms/loop
Array size: 1E9, 1 loops, best of 3: 1.172 s/loop
Why should this happen? As with the boolean case, np.all should still only
have to check the first element before returning, so times should still be
constant w.r.t. array size.
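For the float case too, a chunked scan restores the early exit that np.all isn't providing here. Again just a sketch; the helper name and chunk size are my own invention, not NumPy API:

```python
import numpy as np

def chunked_all(arr, chunk_size=2**20):
    # Reduce one chunk at a time so a False result stops the scan
    # early, rather than reading the whole array. (Names here are
    # my own choices.)
    flat = arr.ravel()
    for start in range(0, flat.size, chunk_size):
        if not flat[start:start + chunk_size].all():
            return False
    return True

x = np.zeros(10**7, dtype=np.float32)
chunked_all(x)   # False after inspecting only the first chunk
```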