We can use np.searchsorted for a performance boost, more so when the lookup array B already holds sorted unique values -
import numpy as np

def intersect1d_searchsorted(A, B, assume_unique=False):
    if not assume_unique:
        # Get sorted, unique values of B so that searchsorted is valid
        B_ar = np.unique(B)
    else:
        B_ar = B
    # For each element of A, index of its insertion position in B_ar
    idx = np.searchsorted(B_ar, A)
    # Elements of A larger than B_ar's last value get idx == len(B_ar);
    # wrap those to 0 to keep the gather in bounds (they can't match anyway)
    idx[idx == len(B_ar)] = 0
    # Keep only the elements of A that actually hit a value in B_ar
    return A[B_ar[idx] == A]
That assume_unique flag makes it work for both the generic case and the special case of B being unique and sorted.
Sample run -
In [89]: A = np.array([10,4,6,7,1,5,3,4,24,1,1,9,10,10,18])
...: B = np.array([1,4,5,6,7,8,9])
In [90]: intersect1d_searchsorted(A,B,assume_unique=True)
Out[90]: array([4, 6, 7, 1, 5, 4, 1, 1, 9])
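For the generic case (B unsorted, possibly with duplicates), a quick cross-check against the np.in1d-based filtering - a minimal sketch, assuming the function above is defined; the B_generic array here is made up just for illustration:

B_generic = np.array([8, 5, 1, 4, 4, 9])   # unsorted, with duplicates
out_ss = intersect1d_searchsorted(A, B_generic, assume_unique=False)
out_in1d = A[np.in1d(A, B_generic)]
# Both keep the elements of A that appear anywhere in B_generic, in A's order
assert np.array_equal(out_ss, out_in1d)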
Timings comparing against another vectorized np.in1d-based solution (listed in two other answers) on large arrays, for both cases -
In [103]: A = np.random.randint(0,10000,(1000000))
In [104]: B = np.random.randint(0,10000,(1000000))
In [105]: %timeit A[np.in1d(A, B)]
...: %timeit A[np.in1d(A, B, assume_unique=False)]
...: %timeit intersect1d_searchsorted(A,B,assume_unique=False)
1 loop, best of 3: 197 ms per loop
10 loops, best of 3: 190 ms per loop
10 loops, best of 3: 151 ms per loop
In [106]: B = np.unique(np.random.randint(0,10000,(5000)))
In [107]: %timeit A[np.in1d(A, B)]
...: %timeit A[np.in1d(A, B, assume_unique=True)]
...: %timeit intersect1d_searchsorted(A,B,assume_unique=True)
10 loops, best of 3: 130 ms per loop
1 loop, best of 3: 218 ms per loop
10 loops, best of 3: 80.2 ms per loop
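If you want to rerun the comparison outside IPython, here's a minimal sketch using the timeit module, assuming intersect1d_searchsorted from above is defined; the array sizes mirror the second case and absolute numbers will differ by machine and NumPy version:

import timeit
import numpy as np

A = np.random.randint(0, 10000, (1000000))
B = np.unique(np.random.randint(0, 10000, (5000)))

# number=10 keeps the run short; increase it for more stable numbers
t_in1d = timeit.timeit(lambda: A[np.in1d(A, B, assume_unique=True)], number=10)
t_ss = timeit.timeit(lambda: intersect1d_searchsorted(A, B, assume_unique=True), number=10)
print(t_in1d, t_ss)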