204
import numpy as np
import pandas as pd

df = pd.DataFrame({'Col1': ['Bob', 'Joe', 'Bill', 'Mary', 'Joe'],
                   'Col2': ['Joe', 'Steve', 'Bob', 'Bob', 'Steve'],
                   'Col3': np.random.random(5)})

What is the best way to return the unique values of 'Col1' and 'Col2'?

The desired output is

'Bob', 'Joe', 'Bill', 'Mary', 'Steve'
Alex Riley
user2333196

  • See also [unique combinations of values in selected columns in pandas data frame and count](https://stackoverflow.com/questions/35268817/unique-combinations-of-values-in-selected-columns-in-pandas-data-frame-and-count) for a different but related question. The selected answer there uses `df1.groupby(['A','B']).size().reset_index().rename(columns={0:'count'})` – Paul Rougieux Jun 20 '19 at 09:34

12 Answers

284

pd.unique returns the unique values from an input array, or DataFrame column or index.

The input to this function needs to be one-dimensional, so multiple columns will need to be combined. The simplest way is to select the columns you want and then view the values in a flattened NumPy array. The whole operation looks like this:

>>> pd.unique(df[['Col1', 'Col2']].values.ravel('K'))
array(['Bob', 'Joe', 'Bill', 'Mary', 'Steve'], dtype=object)

Note that ravel() is an array method that returns a flattened view of a multidimensional array (without copying, when possible). The argument 'K' tells the method to flatten the array in the order the elements are stored in memory (pandas typically stores underlying arrays in Fortran-contiguous order; columns before rows). This can be significantly faster than using the method's default 'C' order.
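
A quick sanity check, as a minimal sketch: whichever memory layout you happen to have, the flattening order only affects the order of the result, not which values are returned.

arr = df[['Col1', 'Col2']].values                              # 2-D NumPy array of the two columns
print(arr.flags['C_CONTIGUOUS'], arr.flags['F_CONTIGUOUS'])    # inspect the memory layout
print(pd.unique(arr.ravel('K')))                               # flatten in memory order
print(pd.unique(arr.ravel()))                                  # flatten in C order; same values, possibly different order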


An alternative way is to select the columns and pass them to np.unique:

>>> np.unique(df[['Col1', 'Col2']].values)
array(['Bill', 'Bob', 'Joe', 'Mary', 'Steve'], dtype=object)

There is no need to use ravel() here as the method handles multidimensional arrays. Even so, this is likely to be slower than pd.unique as it uses a sort-based algorithm rather than a hashtable to identify unique values.

The difference in speed is significant for larger DataFrames (especially if there are only a handful of unique values):

>>> df1 = pd.concat([df]*100000, ignore_index=True) # DataFrame with 500000 rows
>>> %timeit np.unique(df1[['Col1', 'Col2']].values)
1 loop, best of 3: 1.12 s per loop

>>> %timeit pd.unique(df1[['Col1', 'Col2']].values.ravel('K'))
10 loops, best of 3: 38.9 ms per loop

>>> %timeit pd.unique(df1[['Col1', 'Col2']].values.ravel()) # ravel using C order
10 loops, best of 3: 49.9 ms per loop
Poe Dator
Alex Riley

  • How do you get a dataframe back instead of an array? – Lisle Jun 03 '16 at 14:57
  • @Lisle: both methods return a NumPy array, so you'll have to construct it manually, e.g., `pd.DataFrame(unique_values)`. There's no good way to get back a DataFrame directly. – Alex Riley Nov 08 '17 at 12:41
  • @Lisle since he has used pd.unique it returns a numpy.ndarray as a final output. Is this what you were asking? – Ash Upadhyay Sep 05 '19 at 12:15
  • @Lisle, maybe this one df = df.drop_duplicates(subset=['C1','C2','C3'])? – tickly potato Jun 15 '20 at 19:11
  • To get only the columns you need into a dataframe you could do df.groupby(['C1', 'C2', 'C3']).size().reset_index().drop(columns=0). This does a group by, which by default picks the unique combinations and counts the items per group; reset_index converts the multi-index back to a flat frame, and the final drop removes the count column. – andrnev May 03 '21 at 10:02
15

I have set up a DataFrame with a few simple strings in its columns:

>>> df
   a  b
0  a  g
1  b  h
2  d  a
3  e  e

You can concatenate the columns you are interested in and call the unique function:

>>> pandas.concat([df['a'], df['b']]).unique()
array(['a', 'b', 'd', 'e', 'g', 'h'], dtype=object)
Mike

  • This doesn't work when you have something like this `this_is_uniuqe = { 'col1': ["Hippo", "H"], "col2": ["potamus", "ippopotamus"], }` – sixtyfootersdude Nov 11 '20 at 00:17
11
In [5]: set(df.Col1).union(set(df.Col2))
Out[5]: {'Bill', 'Bob', 'Joe', 'Mary', 'Steve'}

Or:

set(df.Col1) | set(df.Col2)
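
If there are more than two columns, the set union generalizes; a minimal sketch (the cols list here is only illustrative):

cols = ['Col1', 'Col2']                          # any subset of column names
unique_values = set().union(*(df[c] for c in cols))
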
James Little
7

An updated solution: as of NumPy v1.13+ you can specify the axis in np.unique when using multiple columns; otherwise the array is implicitly flattened.

import numpy as np

np.unique(df[['Col1', 'Col2']], axis=0)

This change was introduced Nov 2016: https://github.com/numpy/numpy/commit/1f764dbff7c496d6636dc0430f083ada9ff4e4be
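
Note that the two call styles answer slightly different questions; a small sketch of the distinction:

np.unique(df[['Col1', 'Col2']], axis=0)    # unique (Col1, Col2) rows, returned as a 2-D array
np.unique(df[['Col1', 'Col2']].values)     # unique individual values, returned as a sorted 1-D array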

erikreed
4

For those of us who love all things pandas, apply, and of course lambda functions:

df['Col3'] = df[['Col1', 'Col2']].apply(lambda x: ''.join(x), axis=1)
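
If you go this route, joining with a separator avoids treating different pairs that happen to concatenate to the same string as equal; a sketch (the separator choice is arbitrary):

df['Col3'] = df[['Col1', 'Col2']].apply(lambda x: '|'.join(x), axis=1)  # e.g. 'Bob|Joe' rather than 'BobJoe'
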
Lisle
3

Here's another way:


import numpy as np
set(np.concatenate(df.values))
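
As written this pulls in every column, including the numeric Col3; restricting it to the columns of interest is a one-line change (sketch):

set(np.concatenate(df[['Col1', 'Col2']].values))   # only the two string columns
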
muon
2

Non-pandas solution: using set().

import pandas as pd
import numpy as np

df = pd.DataFrame({'Col1' : ['Bob', 'Joe', 'Bill', 'Mary', 'Joe'],
                   'Col2' : ['Joe', 'Steve', 'Bob', 'Bob', 'Steve'],
                   'Col3' : np.random.random(5)})

print df

print set(df.Col1.append(df.Col2).values)

Output:

   Col1   Col2      Col3
0   Bob    Joe  0.201079
1   Joe  Steve  0.703279
2  Bill    Bob  0.722724
3  Mary    Bob  0.093912
4   Joe  Steve  0.766027
set(['Steve', 'Bob', 'Bill', 'Joe', 'Mary'])
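
This is Python 2 syntax, and Series.append was removed in pandas 2.0; on current versions the same idea can be written with pd.concat (a sketch):

print(set(pd.concat([df.Col1, df.Col2]).values))
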
WGS
1
df = pd.DataFrame({'Col1': ['Bob', 'Joe', 'Bill', 'Mary', 'Joe'],
                   'Col2': ['Joe', 'Steve', 'Bob', 'Bob', 'Steve'],
                   'Col3': np.random.random(5)})

If your question is how to get the unique values of each column individually:

Put the column labels in a list

column_labels = ['Col1', 'Col2']

Create an empty dict

unique_dict = {}

Iterate over selected columns to get their unique values

for column_label in column_labels: 
    unique_values = df[column_label].unique()
    unique_dict.update({column_label: unique_values})
unique_ser = pd.Series(unique_dict)
print(unique_ser)
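
The loop above can also be written as a dict comprehension over the selected columns (a compact sketch of the same idea):

unique_ser = pd.Series({col: df[col].unique() for col in column_labels})
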
0
list(set(df[['Col1', 'Col2']].as_matrix().reshape((1,-1)).tolist()[0]))

The output will be ['Mary', 'Joe', 'Steve', 'Bob', 'Bill']

smishra
  • `DataFrame` object has no attribute `as_matrix`. – arilwan Jan 19 '23 at 13:46
  • Depending on which version you are using. Please see https://pandas.pydata.org/pandas-docs/version/0.25.1/reference/api/pandas.DataFrame.as_matrix.html – smishra Jan 25 '23 at 22:37
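
As the comments note, as_matrix is gone in pandas 1.0+; DataFrame.to_numpy() fills the same role there, e.g. (sketch):

list(set(df[['Col1', 'Col2']].to_numpy().ravel()))
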
0

Get a list of unique values given a list of column names:

cols = ['col1','col2','col3','col4']
unique_l = pd.concat([df[col] for col in cols]).unique()
BSalita
0

You can use stack to combine multiple columns and drop_duplicates to find unique values:

df[['Col1', 'Col2']].stack().drop_duplicates().tolist()

Output:

['Bob', 'Joe', 'Steve', 'Bill', 'Mary']
Mykola Zotko
-1
import pandas as pd

df = pd.DataFrame({'col1': ["a", "a", "b", "c", "c", "d"],
                   'col2': ["x", "x", "y", "y", "z", "w"],
                   'col3': [1, 2, 2, 3, 4, 2]})
df

The output is

  col1 col2 col3
0   a   x   1
1   a   x   2
2   b   y   2
3   c   y   3
4   c   z   4
5   d   w   2

To get the unique values from all the columns:

a = {}
for i in range(df.shape[1]):
    j = df.columns[i]
    a[j] = df.iloc[:, i].unique()

for p, q in a.items():
    print(f"unique value in {p} are {list(q)} ")

The output is

    unique value in col1 are ['a', 'b', 'c', 'd'] 
    unique value in col2 are ['x', 'y', 'z', 'w'] 
    unique value in col3 are [1, 2, 3, 4]