
I have a dataframe df as follows:

   userId  pageId  tag
0  3122471  e852   18
1  3122471  f3e2   18
2  3122471  7e93   18
3  3122471  2768    6
4  3122471  53d9    6
5  3122471  06d7   15
6  3122471  e31c   15
7  3122471  c6f3    2
8  1234123  fjwe    1
9  1234123  eiae    4
10 1234123  ieha    4

After using df.groupby(['userId', 'tag'])['pageId'].count() to group the data by userId and tag, I get:

userId   tag
3122471  2      1
         6      2
         15     2
         18     3
1234123  1      1
         4      2

Now I want to find the tag that each user has most often, like this:

userId   tag
3122471  18
1234123   4

(Note: if multiple tags have the same count, I want to use a function my_rule to determine which one to show.)
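
A minimal sketch of that selection, assuming my_rule is a user-supplied tie-breaker that receives the tied tag ids and returns one of them (the body below is only a placeholder):

import pandas as pd

df = pd.DataFrame({
    'userId': [3122471]*8 + [1234123]*3,
    'pageId': ['e852', 'f3e2', '7e93', '2768', '53d9', '06d7', 'e31c', 'c6f3',
               'fjwe', 'eiae', 'ieha'],
    'tag':    [18, 18, 18, 6, 6, 15, 15, 2, 1, 4, 4],
})

def my_rule(tags):
    # placeholder tie-breaker: simply pick the smallest tag id
    return min(tags)

def top_tag(s):
    counts = s.value_counts()              # rows (pageIds) per tag
    tied = counts[counts == counts.max()]  # tags sharing the highest count
    return tied.index[0] if len(tied) == 1 else my_rule(tied.index)

print(df.groupby('userId')['tag'].apply(top_tag))
# userId
# 1234123     4
# 3122471    18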

weigod

3 Answers


You could work on aggregated data.

In [387]: dff = df.groupby(['userId', 'tag'], as_index=False)['pageId'].count()

In [388]: dff
Out[388]:
    userId  tag  pageId
0  1234123    1       1
1  1234123    4       2
2  3122471    2       1
3  3122471    6       2
4  3122471   15       2
5  3122471   18       3

In [389]: dff.groupby('userId').apply(lambda x: x.tag[x.pageId.idxmax()])
Out[389]:
userId
1234123     4
3122471    18
dtype: int64
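
A possible one-pass variant of the same idea, without the intermediate dff (the name most_common below is just illustrative; this counts rows per tag, which matches count() on pageId as long as there are no missing pageIds):

# for each user, take the tag whose row count is highest
most_common = df.groupby('userId')['tag'].agg(lambda s: s.value_counts().idxmax())
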
Zero
  • BTW `df.groupby(['userId'], as_index=False)['tag'].max()` seems to output the largest tag id, not the most frequent tag? – weigod Jul 18 '17 at 09:41

Group the original dataframe by userId:

 df.groupby('userId').max()['tag']

or

 df.groupby('userId', as_index=False)['tag'].max()

Note that the second solution is roughly a factor of two faster:

%timeit df.groupby('userId').max()['tag']
# 100 loops, best of 3: 5.69 ms per loop
%timeit df.groupby('userId', as_index=False)['tag'].max()
# 100 loops, best of 3: 2.43 ms per loop
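
As the comment above and the answer below point out, max() picks the largest tag id, not the most frequent one; on the sample data the two happen to coincide. A small sketch where they differ:

import pandas as pd

demo = pd.DataFrame({'userId': [1, 1, 1], 'tag': [5, 5, 9]})

print(demo.groupby('userId')['tag'].max())
# 9 -> largest tag id, even though 5 occurs more often
print(demo.groupby('userId')['tag'].agg(lambda s: s.value_counts().idxmax()))
# 5 -> the most frequent tag
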
VinceP

I think you need DataFrameGroupBy.size with DataFrameGroupBy.idxmax, but first reset_index:

What is the difference between size and count in pandas?

df = df.groupby(['userId', 'tag'])['pageId'].size()
df = (df.reset_index(level='userId')
        .groupby('userId')['pageId'].idxmax()
        .reset_index(name='tag'))
print (df)
    userId  tag
0  1234123    4
1  3122471   18
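
For the linked size vs. count question: size counts every row in the group, while count skips missing values, so with no NaNs in pageId the two agree here. A quick sketch of the difference:

import numpy as np
import pandas as pd

tmp = pd.DataFrame({'userId': [1, 1, 2],
                    'tag':    [4, 4, 7],
                    'pageId': ['a', np.nan, 'b']})

print(tmp.groupby(['userId', 'tag'])['pageId'].size())   # 2 and 1 (all rows)
print(tmp.groupby(['userId', 'tag'])['pageId'].count())  # 1 and 1 (non-NaN only)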

Timings:

np.random.seed(123)
N = 100000

df = pd.DataFrame(np.random.randint(1000, size=(N, 3)), columns= ['userId','pageId','tag'])
#print (df)

In [188]: %timeit (df.groupby(['userId', 'tag'], as_index=False)['pageId'].count().groupby('userId').apply(lambda x: x.tag[x.pageId.idxmax()]))
10 loops, best of 3: 180 ms per loop

In [189]: %timeit (df.groupby(['userId', 'tag'])['pageId'].size().reset_index(level='userId').groupby('userId')['pageId'].idxmax())
10 loops, best of 3: 103 ms per loop

VinceP's solution is wrong, so it is not included in the timings.

jezrael