Approach #1

Use np.einsum for the distance computations. To solve our case here, we could do -
import numpy as np

def dist_matrix_vec(matrix, vec):
    # Row-wise differences, then per-row sum of squares via einsum
    d = np.subtract(matrix, vec)
    return np.sqrt(np.einsum('ij,ij->i', d, d))
Sample run -
In [251]: A = [[1,2,3],[2,3,4],[8,9,10]]
In [252]: B = np.array([1,1,1])
In [253]: dist_matrix_vec(A,B)
Out[253]: array([ 2.23606798, 3.74165739, 13.92838828])
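For reference, the subscripts 'ij,ij->i' just compute the row-wise sum of squares in one pass. A minimal sanity-check sketch (not part of the original setup, the arrays here are only illustrative) comparing it against the naive formulation:

import numpy as np

A = np.array([[1, 2, 3], [2, 3, 4], [8, 9, 10]])
B = np.array([1, 1, 1])
d = A - B

# 'ij,ij->i' multiplies d with itself elementwise and sums along axis 1
out_einsum = np.sqrt(np.einsum('ij,ij->i', d, d))
out_naive = np.sqrt((d**2).sum(axis=1))

print(np.allclose(out_einsum, out_naive))  # True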
Approach #2

When working with large data, we can use the numexpr module, which supports multi-core processing if the intended operations can be expressed as arithmetic ones. To solve our case, we can express it like so -
import numpy as np
import numexpr as ne

def dist_matrix_vec_numexpr(matrix, vec):
    # Ensure array inputs so numexpr can broadcast them
    matrix = np.asarray(matrix)
    vec = np.asarray(vec)
    # Squared differences summed along axis 1, evaluated multi-threaded by numexpr
    return np.sqrt(ne.evaluate('sum((matrix-vec)**2,1)'))
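As a quick sanity check (a minimal sketch assuming the two functions defined above; the random arrays are only illustrative), the numexpr version matches the einsum one:

import numpy as np

np.random.seed(0)
A = np.random.randint(0, 9, (100, 3))
B = np.random.randint(0, 9, (3,))

print(np.allclose(dist_matrix_vec(A, B), dist_matrix_vec_numexpr(A, B)))  # True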
Timings on large arrays -
In [295]: np.random.seed(0)
...: A = np.random.randint(0,9,(10000,3))
...: B = np.random.randint(0,9,(3,))
In [296]: %timeit np.linalg.norm(A - B, axis = 1) #@Nathaniel's soln
...: %timeit dist_matrix_vec(A,B)
...: %timeit dist_matrix_vec_numexpr(A,B)
1000 loops, best of 3: 244 µs per loop
10000 loops, best of 3: 131 µs per loop
10000 loops, best of 3: 96.5 µs per loop
In [297]: np.random.seed(0)
...: A = np.random.randint(0,9,(100000,3))
...: B = np.random.randint(0,9,(3,))
In [298]: %timeit np.linalg.norm(A - B, axis = 1) #@Nathaniel's soln
...: %timeit dist_matrix_vec(A,B)
...: %timeit dist_matrix_vec_numexpr(A,B)
100 loops, best of 3: 5.31 ms per loop
1000 loops, best of 3: 1.43 ms per loop
1000 loops, best of 3: 918 µs per loop
The numexpr based timings were obtained with 8 threads. Thus, with more threads available for compute, it should improve further. See this related post on how to control numexpr's multi-core functionality.
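A minimal sketch of adjusting the thread count via numexpr's set_num_threads (the value 4 here is just an illustrative choice, not a recommendation):

import numexpr as ne

# Cap numexpr at 4 worker threads; the previous setting is returned
old_nthreads = ne.set_num_threads(4)

# ... run dist_matrix_vec_numexpr(A, B) under the new thread budget ...

# Restore the previous setting afterwards
ne.set_num_threads(old_nthreads)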