256

If I have a table like this:

import pandas as pd

df = pd.DataFrame({
    'hID': [101, 102, 103, 101, 102, 104, 105, 101],
    'dID': [10, 11, 12, 10, 11, 10, 12, 10],
    'uID': ['James', 'Henry', 'Abe', 'James', 'Henry', 'Brian', 'Claude', 'James'],
    'mID': ['A', 'B', 'A', 'B', 'A', 'A', 'A', 'C']
})

I can do count(distinct hID) in Qlik to come up with a count of 5 for unique hID. How do I do that in Python using a pandas DataFrame? Or maybe a NumPy array? Similarly, if I were to do count(hID), I would get 8 in Qlik. What is the equivalent way to do it in pandas?
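(For the NumPy side of the question, a minimal sketch, assuming the hID values are pulled out into a plain 1-D array; np.unique gives the distinct values:)

import numpy as np

hid = np.array([101, 102, 103, 101, 102, 104, 105, 101])

len(np.unique(hid))  # 5 -- equivalent of count(distinct hID)
hid.size             # 8 -- equivalent of count(hID); there are no nulls here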

o-90
Alhpa Delta
  • @piRSquared thanks. I could do something like `df[['dID','hID']].agg(['count', 'size', 'nunique'])` and it works, but it does not work when combined with groupby: `df[['dID','hID']].groupby('mID').agg(['count', 'size', 'nunique'])` raises a KeyError. Is there a way to select particular columns and apply a condition? – Alhpa Delta Aug 18 '17 at 17:49
  • Three ways: `df[['mID', 'dID','hID']].groupby('mID').agg(['count', 'size', 'nunique'])` – piRSquared Aug 18 '17 at 17:50
  • Or `df[['dID','hID']].groupby(df['mID']).agg(['count', 'size', 'nunique'])` – piRSquared Aug 18 '17 at 17:50
  • 1
    Or `df.groupby('mID')[['dID', 'hID']].agg(['count', 'size', 'nunique'])` – piRSquared Aug 18 '17 at 17:52

8 Answers

389

To count distinct values, use nunique:

df['hID'].nunique()
5

To count only non-null values, use count:

df['hID'].count()
8

To count all values, including nulls, use the size attribute:

df['hID'].size
8

Edit: to add a condition

Use boolean indexing:

df.loc[df['mID']=='A','hID'].agg(['nunique','count','size'])

Or, using query:

df.query('mID == "A"')['hID'].agg(['nunique','count','size'])

Output:

nunique    5
count      5
size       5
Name: hID, dtype: int64
Michael Mior
Scott Boston
  • Thanks! How do we add a condition? Like nunique for mID='A'? – Alhpa Delta Aug 18 '17 at 16:11
  • How to count number of None values? I have a df of only None, and `.unique()` returns 0 – Gulzar Apr 22 '21 at 15:51
  • 1
    @Gulzar Use `isna` like this: `df['col'].isna().sum()` – Scott Boston Apr 22 '21 at 15:57
  • How to count distinct values, including nans? meaning `count` if no nans, or `count+1` if any nan exists? – Gulzar May 18 '21 at 10:34
  • 1
    @Gulzar To get the count of distinct values, you use nunique. There is a parameter `dropna` that defaults to True; if you change it to False, it will count the distinct values and include NaN as one of them. Example: `df['val'].nunique(dropna=False)`. – Scott Boston May 18 '21 at 13:16
203

Assuming data is the name of your dataframe, you can do:

data['race'].value_counts()

This will show you the distinct elements and their number of occurrences.
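Applied to the hID column from the question (the race column above is just the answerer's own example data), the length of the value_counts result matches the distinct count, and its sum matches the non-null count:

counts = df['hID'].value_counts()
counts

101    3
102    2
103    1
104    1
105    1

len(counts)   # 5 -- same as df['hID'].nunique()
counts.sum()  # 8 -- same as df['hID'].count()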

OzOm18
  • 4
    If you want the proportions for each unique item, you can also do `data['race'].value_counts(normalize=True)` – bogus Oct 09 '19 at 17:19
54

Or get the number of unique values for each column:

df.nunique()

dID    3
hID    5
mID    3
uID    5
dtype: int64

New in pandas 0.20.0: pd.DataFrame.agg

df.agg(['count', 'size', 'nunique'])

         dID  hID  mID  uID
count      8    8    8    8
size       8    8    8    8
nunique    3    5    3    5

You've always been able to do an agg within a groupby. I used stack at the end because I like the presentation better.

df.groupby('mID').agg(['count', 'size', 'nunique']).stack()


             dID  hID  uID
mID                       
A   count      5    5    5
    size       5    5    5
    nunique    3    5    5
B   count      2    2    2
    size       2    2    2
    nunique    2    2    2
C   count      1    1    1
    size       1    1    1
    nunique    1    1    1
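As a small follow-up sketch (not part of the original answer): if only one column's distinct count per group is needed, selecting that column after the groupby returns a plain Series:

df.groupby('mID')['hID'].nunique()

mID
A    5
B    2
C    1
Name: hID, dtype: int64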
piRSquared
10

For the count of unique values (without duplicates):

df['hID'].nunique()

To know how many times each unique value occurs:

df['hID'].value_counts()

fessyadedic
8

You can use nunique in pandas:

df.hID.nunique()
# 5
Psidom
2

To count unique values in a column, say hID of the dataframe df, use:

len(df.hID.unique())
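One caveat, as an aside (not from the answer itself): unique() keeps NaN as a value while nunique() drops it by default, so the two can disagree on columns with missing data:

import pandas as pd

s = pd.Series([101, 102, None])

len(s.unique())          # 3 -- NaN counted as a value
s.nunique()              # 2 -- NaN dropped by default
s.nunique(dropna=False)  # 3 -- matches len(s.unique())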
Das_Geek
Uma Raj
1

I was looking for something similar and I found another way that may help you:

  • If you want to count the number of null values, you could use this function:
def count_nulls(s):
    return s.size - s.count()
  • If you want to include NaN values in your unique counts, you need to pass dropna=False to the nunique function.
def unique_nan(s):
    return s.nunique(dropna=False)
  • Here is a summary of all the values together using the titanic dataset:
# the answer refers to "the titanic dataset"; assuming seaborn's copy of it here
import seaborn as sns

df = sns.load_dataset('titanic')  # has 'embark_town' and 'deck' columns

agg_func_custom_count = {
    'embark_town': ['count', 'nunique', 'size', unique_nan, count_nulls, set]
}
df.groupby(['deck']).agg(agg_func_custom_count)

You can find more info here.

GeoP
-4

You can use the unique property together with the len function:

len(df['hID'].unique())
5