I have a numpy array of 2D vectors that I am trying to normalize as shown below. The array can contain vectors of magnitude zero.
x = np.array([[0.0, 0.0], [1.0, 0.0]])
norms = np.array([np.linalg.norm(a) for a in x])
>>> x/norms
array([[ nan,  0.],
       [ inf,  0.]])
>>> nonzero = norms > 0.0
>>> nonzero
array([False, True], dtype=bool)
Can I somehow use nonzero to apply the division only to the rows x[i] for which nonzero[i] is True? (I can write a loop for this - just wondering if there's a numpy way of doing it.)
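To illustrate, the mask-based version I am imagining would look something like this (a sketch, only checked against the toy input above):

```python
import numpy as np

x = np.array([[0.0, 0.0], [1.0, 0.0]])
norms = np.array([np.linalg.norm(a) for a in x])
nonzero = norms > 0.0

# Select only the rows with a nonzero norm, then divide each selected
# row by its own norm; [:, None] reshapes the selected norms to (k, 1)
# so division broadcasts across each row's components rather than
# across columns. Rows with zero norm are left untouched.
x[nonzero] = x[nonzero] / norms[nonzero][:, None]
```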
Or is there a better way of normalizing the array of vectors, skipping all zero vectors in the process?
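For reference, the kind of loop-free solution I am hoping for might look like this sketch, using the axis/keepdims arguments of np.linalg.norm and the where parameter of np.divide (assuming that combination handles the zero rows cleanly):

```python
import numpy as np

x = np.array([[0.0, 0.0], [1.0, 0.0]])

# Row-wise norms; keepdims=True gives shape (2, 1) so the division
# broadcasts over each row's components instead of over columns.
norms = np.linalg.norm(x, axis=1, keepdims=True)

# Divide only where the norm is nonzero; elsewhere the result keeps
# the zeros supplied via the `out` array, so zero vectors stay zero.
normalized = np.divide(x, norms, out=np.zeros_like(x), where=norms > 0.0)
```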