
Let's say I have a pandas df like so:

Index   A     B
0      foo    3
1      foo    2
2      foo    5
3      bar    3
4      bar    4
5      baz    5

What's a good fast way to add a column like so:

Index   A     B    Aidx
0      foo    3    0
1      foo    2    0
2      foo    5    0
3      bar    3    1
4      bar    4    1
5      baz    5    2

I.e. adding an increasing index for each unique value?

I know I could use df.unique(), then use a dict and enumerate to create a lookup, and then apply that dictionary lookup to create the column. But I feel like there should be faster way, possibly involving groupby with some special function?
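For reference, the dict-and-enumerate lookup I'm describing is something like this sketch (column names as in the example above):

```python
import pandas as pd

df = pd.DataFrame({'A': ['foo', 'foo', 'foo', 'bar', 'bar', 'baz'],
                   'B': [3, 2, 5, 3, 4, 5]})

# Build a value -> index lookup in order of first appearance, then map it
lookup = {val: i for i, val in enumerate(df['A'].unique())}
df['Aidx'] = df['A'].map(lookup)
print(df['Aidx'].tolist())  # [0, 0, 0, 1, 1, 2]
```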

cadolphs

3 Answers


One way is to use ngroup. Just remember to make sure your groupby isn't re-sorting the groups, or you won't get your desired output, so set sort=False:

df['Aidx'] = df.groupby('A', sort=False).ngroup()
>>> df
   Index    A  B  Aidx
0      0  foo  3     0
1      1  foo  2     0
2      2  foo  5     0
3      3  bar  3     1
4      4  bar  4     1
5      5  baz  5     2
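To see why sort=False matters: with the default sort=True the groups are ordered alphabetically before numbering, so the codes come out differently (a quick check, reconstructing the OP's frame):

```python
import pandas as pd

df = pd.DataFrame({'A': ['foo', 'foo', 'foo', 'bar', 'bar', 'baz'],
                   'B': [3, 2, 5, 3, 4, 5]})

# Default sort=True numbers groups alphabetically: bar=0, baz=1, foo=2
print(df.groupby('A').ngroup().tolist())              # [2, 2, 2, 0, 0, 1]
# sort=False numbers groups in order of first appearance
print(df.groupby('A', sort=False).ngroup().tolist())  # [0, 0, 0, 1, 1, 2]
```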
sacuL

No need for groupby; you can use any of the following:

Method 1: factorize

pd.factorize(df.A)[0]
array([0, 0, 0, 1, 1, 2], dtype=int64)
#df['Aidx']=pd.factorize(df.A)[0]

Method 2: sklearn (note LabelEncoder assigns codes in sorted order, so the result differs from the order-of-appearance output above)

from sklearn import preprocessing
le = preprocessing.LabelEncoder()
le.fit(df.A)
LabelEncoder()
le.transform(df.A)
array([2, 2, 2, 0, 0, 1])

Method 3: cat.codes

df.A.astype('category').cat.codes

Method 4: map + unique

l=df.A.unique()
df.A.map(dict(zip(l,range(len(l)))))
0    0
1    0
2    0
3    1
4    1
5    2
Name: A, dtype: int64

Method 5: np.unique (also sorted order, as the output shows)

x,y=np.unique(df.A.values,return_inverse=True)
y
array([2, 2, 2, 0, 0, 1], dtype=int64)
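A side-by-side sketch of why factorize matches the desired order while np.unique does not (the re-factorize remap at the end is my addition, not part of the answer):

```python
import numpy as np
import pandas as pd

a = pd.Series(['foo', 'foo', 'foo', 'bar', 'bar', 'baz'])

# pd.factorize labels values in order of first appearance
codes, uniques = pd.factorize(a)
print(codes.tolist())  # [0, 0, 0, 1, 1, 2]

# np.unique sorts the unique values first, so the labels differ
vals, inv = np.unique(a.values, return_inverse=True)
print(inv.tolist())    # [2, 2, 2, 0, 0, 1]

# Re-factorizing the sorted labels recovers order-of-appearance codes
order = pd.factorize(inv)[0]
print(order.tolist())  # [0, 0, 0, 1, 1, 2]
```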

EDIT: Some timings (run on a larger test dataframe, view, with a Company column):

%timeit pd.factorize(view.Company)[0]

The slowest run took 6.68 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 155 µs per loop

%timeit view.Company.astype('category').cat.codes

The slowest run took 4.48 times longer than the fastest. This could mean that an intermediate result is being cached.
1000 loops, best of 3: 449 µs per loop

from itertools import izip

%timeit l = view.Company.unique(); view.Company.map(dict(izip(l,xrange(len(l)))))

1000 loops, best of 3: 666 µs per loop

import numpy as np

%timeit np.unique(view.Company.values, return_inverse=True)

The slowest run took 8.08 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 32.7 µs per loop

Seems like numpy wins.

BENY
  • Good solutions, they should be really fast as well. May be add time comparison as OP is looking for the most efficient solution – Vaishali Dec 14 '18 at 02:11
  • @Vaishali sorry it is hard for me to get the timing , would you mind add that for me , thanks a lot – BENY Dec 14 '18 at 02:23

One more way of doing this:

df['C'] = df.A.ne(df.A.shift()).cumsum() - 1
df

Printing df now gives:

  Index  A    B  C
0  0     foo  3  0
1  1     foo  2  0 
2  2     foo  5  0 
3  3     bar  3  1 
4  4     bar  4  1 
5  5     baz  5  2

Explanation: let's break the solution into parts.

1st step: Compare the A column with itself shifted down by one row:

df.A.ne(df.A.shift())

Output we will get is:

0     True
1    False
2    False
3     True
4    False
5     True

2nd step: Apply cumsum(): every True (which marks a row where A differs from the previous row, i.e. the start of a new run) increases the running count by one.

df.A.ne(df.A.shift()).cumsum() - 1
0    0
1    0
2    0
3    1
4    1
5    2
Name: A, dtype: int32

3rd step: Assign the result to df['C'], which creates a new column named C in df.
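The three steps as one runnable sketch. Note a caveat not stated in the answer: this numbers consecutive runs rather than groups, so it only matches the other answers when equal values in A are adjacent:

```python
import pandas as pd

df = pd.DataFrame({'A': ['foo', 'foo', 'foo', 'bar', 'bar', 'baz'],
                   'B': [3, 2, 5, 3, 4, 5]})

# True at each row where A differs from the previous row (the first row
# compares against NaN, so it is always True)
starts = df['A'].ne(df['A'].shift())
# Cumulative sum counts run starts; subtract 1 to start numbering at 0
df['C'] = starts.cumsum() - 1
print(df['C'].tolist())  # [0, 0, 0, 1, 1, 2]

# With scattered values, each run gets a fresh number:
s = pd.Series(['foo', 'bar', 'foo'])
print((s.ne(s.shift()).cumsum() - 1).tolist())  # [0, 1, 2]
```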

RavinderSingh13