
I have a Pandas DataFrame and I want to find all the unique values in that DataFrame, irrespective of rows/columns. If I have a 10 x 10 DataFrame and it contains, say, 84 unique values, I need to find them - not the count.

I can create a set and add the values of each row by iterating over the rows of the DataFrame, but I feel that may be inefficient (though I can't justify why). Is there an efficient way to find them? Is there a predefined function?
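For concreteness, a minimal sketch of the row-iteration approach described above (using the usual pandas/NumPy imports; variable names here are just illustrative):

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randint(0, 10, size=(10, 10)))

# Collect uniques by iterating over rows, as described above.
seen = set()
for _, row in df.iterrows():
    seen.update(row.values)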


2 Answers

In [1]: import numpy as np; from pandas import DataFrame, Series

In [2]: df = DataFrame(np.random.randint(0, 10, size=100).reshape(10, 10))

In [3]: df
Out[3]:
   0  1  2  3  4  5  6  7  8  9
0  2  2  3  2  6  1  9  9  3  3
1  1  2  5  8  5  2  5  0  6  3
2  0  7  0  7  5  5  9  1  0  3
3  5  3  2  3  7  6  8  3  8  4
4  8  0  2  2  3  9  7  1  2  7
5  3  2  8  5  6  4  3  7  0  8
6  4  2  6  5  3  3  4  5  3  2
7  7  6  0  6  6  7  1  7  5  1
8  7  4  3  1  0  6  9  7  7  3
9  5  3  4  5  2  0  8  6  4  7

In [13]: Series(df.values.ravel()).unique()
Out[13]: array([9, 1, 4, 6, 0, 7, 5, 8, 3, 2])

NumPy's unique sorts, so it's faster to do it this way (and then sort afterwards if you need to):

In [14]: df = DataFrame(np.random.randint(0,10,size=10000).reshape(100,100))

In [15]: %timeit Series(df.values.ravel()).unique()
10000 loops, best of 3: 137 µs per loop

In [16]: %timeit np.unique(df.values.ravel())
1000 loops, best of 3: 270 µs per loop
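If you do need the result sorted, one option (a sketch, not part of the original answer) is to take the unsorted uniques and sort them explicitly afterwards:

import numpy as np
from pandas import DataFrame, Series

df = DataFrame(np.random.randint(0, 10, size=10000).reshape(100, 100))

# Get the uniques without sorting, then sort only when required.
uniques = Series(df.values.ravel()).unique()
sorted_uniques = np.sort(uniques)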
– Jeff
  • For larger arrays it's faster to use pd.unique, which doesn't sort. – Andy Hayden Nov 20 '13 at 00:37
  • Even better is to `pd.unique(df.values.ravel())`, which avoids creating the Series :) – Andy Hayden Nov 20 '13 at 00:53
  • Many thanks @Andy and Jeff. Learning Pandas, Scipy/Numpy very fast...with expert help from SO! – user1717931 Nov 20 '13 at 19:30
  • I have a very large df with date values and the following workflow is significantly faster for me: `cols = df.columns; df['dummy'] = 0.0; df.groupby(cols)[['dummy']].size().reset_index().drop('dummy',axis=1)` – kalu May 21 '14 at 16:45

Or you can use:

df.stack().unique()

Then you don't need to worry about NaN values, as they are excluded when stacking.
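A quick illustration (a sketch, assuming the classic stack() behaviour where rows with NaN are dropped by default):

import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1.0, 2.0, np.nan],
                   "b": [2.0, np.nan, 3.0]})

# stack() drops the NaN entries, so they never reach unique()
df.stack().unique()   # -> array([1., 2., 3.])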

– user1506145