I am looking for an efficient way (no for loops) to compute the squared Euclidean distance between a set of samples and a set of cluster centroids.
Example:
import numpy as np
X = np.array([[1, 2, 3], [1, 1, 1], [0, 2, 0]])
y = np.array([[1, 2, 3], [0, 1, 0]])
Expected output:
array([[ 0., 11.],
       [ 5.,  2.],
       [10.,  1.]])
This is the squared Euclidean distance between each sample in X and each centroid in y; for example, the entry in row 1, column 2 is (1-0)^2 + (2-1)^2 + (3-0)^2 = 11.
I came up with two solutions:
Solution 1:
def dist_2(X, y):
    # squared norm of each sample: shape (n_samples,)
    X_square_sum = np.sum(np.square(X), axis=1)
    # squared norm of each centroid: shape (n_centroids,)
    y_square_sum = np.sum(np.square(y), axis=1)
    # cross terms x . y for every pair: shape (n_samples, n_centroids)
    dot_xy = np.dot(X, y.T)
    # tile both norm vectors up to the full (n_samples, n_centroids) grid
    X_square_sum_tile = np.tile(X_square_sum.reshape(-1, 1), (1, y.shape[0]))
    y_square_sum_tile = np.tile(y_square_sum.reshape(1, -1), (X.shape[0], 1))
    # ||x - y||^2 = ||x||^2 + ||y||^2 - 2 * x . y
    dist = X_square_sum_tile + y_square_sum_tile - (2 * dot_xy)
    return dist
dist = dist_2(X, y)
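As a side note, the two np.tile calls are not strictly necessary: NumPy broadcasting can expand the row and column sums implicitly, which avoids materializing the two tiled (n_samples, n_centroids) copies. A minimal sketch of the same identity (the name dist_2_broadcast is mine, for illustration only):
def dist_2_broadcast(X, y):
    # column vector of squared sample norms: shape (n_samples, 1)
    X_sq = np.sum(np.square(X), axis=1).reshape(-1, 1)
    # row vector of squared centroid norms: shape (1, n_centroids)
    y_sq = np.sum(np.square(y), axis=1).reshape(1, -1)
    # broadcasting expands both sums to (n_samples, n_centroids) on the fly
    return X_sq + y_sq - 2 * np.dot(X, y.T)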
Solution 2:
import scipy.spatial.distance
dist = scipy.spatial.distance.cdist(X, y) ** 2
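(Aside, not part of the original comparison: cdist accepts a metric argument, and 'sqeuclidean' returns squared distances directly, which skips the extra elementwise squaring pass over the output.)
from scipy.spatial import distance
# 'sqeuclidean' yields squared Euclidean distances, so no separate **2 is needed
dist = distance.cdist(X, y, 'sqeuclidean')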
Performance (wall-clock time) of the two solutions:
import time
X = np.random.random((100000, 50))
y = np.random.random((100, 50))
start = time.time()
dist = scipy.spatial.distance.cdist(X, y) ** 2
end = time.time()
print(end - start)
Average elapsed wall-clock time = 0.7 sec
start = time.time()
dist = dist_2(X, y)
end = time.time()
print(end - start)
Average elapsed wall-clock time = 0.3 sec
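(Aside: a single time.time() measurement can be noisy. Assuming the averages above come from several runs, the standard timeit module is a more robust way to collect them — a minimal sketch:)
import timeit
# run 5 independent single calls and keep the fastest (least noisy) one
best = min(timeit.repeat(lambda: dist_2(X, y), number=1, repeat=5))
print(best)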
Test on a large number of centroids
X = np.random.random((100000, 50))
y = np.random.random((1000, 50))
Average elapsed wall-clock time of "solution 1" = 50 sec (plus memory issues)
Average elapsed wall-clock time of "solution 2" = 6 sec!
Conclusion
It seems that "solution 1" is more efficient than "solution 2" with respect to average elapsed wall-clock time on small data-sets, but it is inefficient with respect to memory and becomes slower once the number of centroids grows: with 1000 centroids, each tiled temporary is a full (100000, 1000) float64 array of roughly 0.8 GB.
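(One more candidate that might be worth benchmarking, assuming scikit-learn is available: sklearn.metrics.pairwise.euclidean_distances uses, as far as I know, the same ||x||^2 + ||y||^2 - 2 * x . y identity as dist_2, but via broadcasting rather than tiled temporaries, and it can return squared distances directly:)
from sklearn.metrics.pairwise import euclidean_distances
# squared=True yields squared Euclidean distances, matching dist_2
# up to floating-point round-off
dist = euclidean_distances(X, y, squared=True)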
Any suggestions?