
I start with an array `a` containing N unique values (`product(a.shape) >= N`).
I need to find the array `b` that, at each position, holds the index (0 .. N-1) of the corresponding element of `a` in the sorted list of unique values of `a`.

As an example

import numpy as np
np.random.seed(42)
a = np.random.choice([0.1,1.3,7,9.4], size=(4,3))
print(a)

prints a as

[[ 7.   9.4  0.1]
 [ 7.   7.   9.4]
 [ 0.1  0.1  7. ]
 [ 1.3  7.   7. ]]

The unique values are [0.1, 1.3, 7.0, 9.4], so the required outcome b would be

[[2 3 0]
 [2 2 3]
 [0 0 2]
 [1 2 2]]

(e.g. the value at a[0,0] is 7.; 7. has the index 2; thus b[0,0] == 2.)

Since numpy does not have an index function, I could do this using a loop. Either looping over the input array, like this:

u = np.unique(a).tolist()
af = a.flatten()
b = np.empty(len(af), dtype=int)
for i in range(len(af)):
    b[i] = u.index(af[i])
b = b.reshape(a.shape)
print(b)

or looping over the unique values as follows:

u = np.unique(a)
b = np.empty(a.shape, dtype=int)
for i in range(len(u)):
    b[np.where(a == u[i])] = i
print(b)

I suppose that the second way, looping over the unique values, is already more efficient than the first in cases where not all values in a are distinct; but it still involves a Python-level loop and is rather inefficient compared to vectorized operations.

So my question is: what is the most efficient way of obtaining the array b filled with the indices of the unique values of a?
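For comparison, a loop-free sketch (my own addition, not one of the two loops above) would combine np.unique with np.searchsorted, which looks up each element's position in the sorted unique values:

```python
import numpy as np

np.random.seed(42)
a = np.random.choice([0.1, 1.3, 7, 9.4], size=(4, 3))

u = np.unique(a)            # sorted unique values
b = np.searchsorted(u, a)   # for each element of a, its index in u; keeps a's shape
print(b)
```

This works here because every element of a is guaranteed to appear in u, so the searchsorted position is exactly the wanted index.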

ImportanceOfBeingErnest

1 Answer

You could use np.unique with its optional argument return_inverse -

np.unique(a, return_inverse=1)[1].reshape(a.shape)

Sample run -

In [308]: a
Out[308]: 
array([[ 7. ,  9.4,  0.1],
       [ 7. ,  7. ,  9.4],
       [ 0.1,  0.1,  7. ],
       [ 1.3,  7. ,  7. ]])

In [309]: np.unique(a, return_inverse=1)[1].reshape(a.shape)
Out[309]: 
array([[2, 3, 0],
       [2, 2, 3],
       [0, 0, 2],
       [1, 2, 2]])
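As a quick self-contained check (a sketch I am adding, not part of the original answer): since b indexes into the sorted uniques, u[b] should reconstruct a exactly -

```python
import numpy as np

np.random.seed(42)
a = np.random.choice([0.1, 1.3, 7, 9.4], size=(4, 3))

# return_inverse gives, for each element of a, its index among the sorted uniques
u, inv = np.unique(a, return_inverse=True)
b = inv.reshape(a.shape)

assert (u[b] == a).all()   # indexing the uniques with b rebuilds a
print(b)
```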

Going through the source code of np.unique, it looks pretty efficient to me; still, pruning out the unnecessary parts, we end up with another solution, like so -

def unique_return_inverse(a):
    ar = a.flatten()
    perm = ar.argsort()               # permutation that sorts the flattened values
    aux = ar[perm]                    # the values in sorted order
    flag = np.concatenate(([True], aux[1:] != aux[:-1]))  # True at the start of each run of equal values
    iflag = np.cumsum(flag) - 1       # index of each sorted value among the uniques
    inv_idx = np.empty(ar.shape, dtype=np.intp)
    inv_idx[perm] = iflag             # scatter those indices back to the original order
    return inv_idx
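A usage sketch for the 4x3 example (my addition; the helper returns a flat index array, so it needs the original shape back - the function body is repeated here only so the snippet runs on its own):

```python
import numpy as np

def unique_return_inverse(a):
    # identical to the helper above
    ar = a.flatten()
    perm = ar.argsort()
    aux = ar[perm]
    flag = np.concatenate(([True], aux[1:] != aux[:-1]))
    iflag = np.cumsum(flag) - 1
    inv_idx = np.empty(ar.shape, dtype=np.intp)
    inv_idx[perm] = iflag
    return inv_idx

np.random.seed(42)
a = np.random.choice([0.1, 1.3, 7, 9.4], size=(4, 3))

b = unique_return_inverse(a).reshape(a.shape)
assert (np.unique(a)[b] == a).all()  # the indices map back to the original values
print(b)
```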

Timings -

In [444]: a= np.random.randint(0,1000,(1000,400))

In [445]: np.allclose( np.unique(a, return_inverse=1)[1],unique_return_inverse(a))
Out[445]: True

In [446]: %timeit np.unique(a, return_inverse=1)[1]
10 loops, best of 3: 30.4 ms per loop

In [447]: %timeit unique_return_inverse(a)
10 loops, best of 3: 29.5 ms per loop

Not a great deal of improvement there over the built-in.

Divakar
  • Wow, this is already a very good candidate for being accepted. However it returns (creates) two arrays of the size of `a`, right? So there might be a more efficient solution in terms of memory?! – ImportanceOfBeingErnest Mar 13 '17 at 13:26
  • @ImportanceOfBeingErnest Added another one for a very very marginal improvement. – Divakar Mar 13 '17 at 14:02