475

How do I find all rows in a pandas DataFrame which have the max value for the count column, after grouping by the ['Sp','Mt'] columns?

Example 1: the following DataFrame:

   Sp   Mt Value   count
0  MM1  S1   a     **3**
1  MM1  S1   n       2
2  MM1  S3   cb    **5**
3  MM2  S3   mk    **8**
4  MM2  S4   bg    **10**
5  MM2  S4   dgd     1
6  MM4  S2   rd      2
7  MM4  S2   cb      2
8  MM4  S2   uyi   **7**

The expected output is the rows whose count is the max in each group, like this:

   Sp   Mt   Value  count
0  MM1  S1   a      **3**
2  MM1  S3   cb     **5**
3  MM2  S3   mk     **8**
4  MM2  S4   bg     **10** 
8  MM4  S2   uyi    **7**

Example 2:

   Sp   Mt   Value  count
4  MM2  S4   bg     10
5  MM2  S4   dgd    1
6  MM4  S2   rd     2
7  MM4  S2   cb     8
8  MM4  S2   uyi    8

Expected output:

   Sp   Mt   Value  count
4  MM2  S4   bg     10
7  MM4  S2   cb     8
8  MM4  S2   uyi    8
jojo12

15 Answers

589

Firstly, we can get the max count for each group like this:

In [1]: df
Out[1]:
    Sp  Mt Value  count
0  MM1  S1     a      3
1  MM1  S1     n      2
2  MM1  S3    cb      5
3  MM2  S3    mk      8
4  MM2  S4    bg     10
5  MM2  S4   dgd      1
6  MM4  S2    rd      2
7  MM4  S2    cb      2
8  MM4  S2   uyi      7

In [2]: df.groupby(['Sp', 'Mt'])['count'].max()
Out[2]:
Sp   Mt
MM1  S1     3
     S3     5
MM2  S3     8
     S4    10
MM4  S2     7
Name: count, dtype: int64

To get the indices of the original DF you can do:

In [3]: idx = df.groupby(['Sp', 'Mt'])['count'].transform('max') == df['count']

In [4]: df[idx]
Out[4]:
    Sp  Mt Value  count
0  MM1  S1     a      3
2  MM1  S3    cb      5
3  MM2  S3    mk      8
4  MM2  S4    bg     10
8  MM4  S2   uyi      7

Note that if you have multiple max values per group, all will be returned.
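
For instance, a minimal sketch mirroring Example 2 from the question, where the MM4/S2 group has a tied max of 8 (df_tie is my own name for the illustration frame):

import pandas as pd

df_tie = pd.DataFrame({'Sp': ['MM4', 'MM4', 'MM4'],
                       'Mt': ['S2', 'S2', 'S2'],
                       'Value': ['rd', 'cb', 'uyi'],
                       'count': [2, 8, 8]})

# Both tied rows match the group max, so both are returned
idx = df_tie.groupby(['Sp', 'Mt'])['count'].transform('max') == df_tie['count']
print(df_tie[idx])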


Update

On a Hail Mary chance that this is what the OP is requesting:

In [5]: df['count_max'] = df.groupby(['Sp', 'Mt'])['count'].transform('max')

In [6]: df
Out[6]:
    Sp  Mt Value  count  count_max
0  MM1  S1     a      3          3
1  MM1  S1     n      2          3
2  MM1  S3    cb      5          5
3  MM2  S3    mk      8          8
4  MM2  S4    bg     10         10
5  MM2  S4   dgd      1         10
6  MM4  S2    rd      2          7
7  MM4  S2    cb      2          7
8  MM4  S2   uyi      7          7
Zelazny7
  • @Zelazny7 I'm using the second, `idx` approach. But I can only afford a single maximum for each group (and my data has a few duplicate maxes). Is there a way to get around this with your solution? – 3pitt Jan 03 '18 at 20:36
  • `transform` may have poor performance when the data set is large enough; getting the max value first and then merging the dataframes is better. – Woods Chen Apr 10 '19 at 02:54
  • As @3pitt mentioned, this is wrong for the original question asked. – Prakash Vanapalli May 31 '23 at 03:15
  • @PrakashVanapalli no it isn't – Zelazny7 Jun 01 '23 at 19:12
306

You can sort the DataFrame by count and then remove duplicates. I think it's easier:

df.sort_values('count', ascending=False).drop_duplicates(['Sp','Mt'])
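
A caveat discussed in the comments below: drop_duplicates keeps exactly one row per ['Sp', 'Mt'] key, so tied maxima are collapsed. A minimal sketch using the tied MM4/S2 group from Example 2 in the question:

import pandas as pd

df_tie = pd.DataFrame({'Sp': ['MM4', 'MM4'], 'Mt': ['S2', 'S2'],
                       'Value': ['cb', 'uyi'], 'count': [8, 8]})

# Only one of the two tied rows survives
print(df_tie.sort_values('count', ascending=False).drop_duplicates(['Sp', 'Mt']))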
Rani
  • Very nice! Fast with largish frames (25k rows) – Nolan Conaway Sep 27 '17 at 18:23
  • For those who are somewhat new to Python: you will need to assign this to a new variable; it doesn't change the current df variable. – Tyler Dec 27 '18 at 17:14
  • @Samir or use `inplace=True` as an argument to `drop_duplicates` – TMrtSmith Feb 04 '19 at 13:11
  • This is a great answer when you need only one of the rows with the same max value; however, it won't work as expected if you need all the rows with max values. – Woods Chen Apr 10 '19 at 02:50
  • @WoodsChen it should work. It keeps the max value per key of [sp, mt] – Rani Apr 10 '19 at 04:28
  • I mean if the dataframe is pd.DataFrame({'sp': [1, 1, 2], 'mt': [1, 1, 2], 'value': [2, 2, 3]}), then there will be 2 rows with the same max value 2 in the group where sp==1 and mt==1. @Rani – Woods Chen Apr 11 '19 at 09:37
  • @WoodsChen, it drops duplicates of [sp, mt], therefore in your example the output should be only one row. – Rani Apr 11 '19 at 15:54
  • You can address the issue of sort ties by using a different `kind` argument. See https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.sort_values.html for more information; some sort kinds are stable. – Manuel Martinez Aug 18 '22 at 19:43
  • This was many times faster with millions of rows than the more readable `.groupby(...).max()` approach. – Marc Maxmeister Nov 28 '22 at 22:25
117

An easy solution is to apply the idxmax() function to get the indices of rows with max values. This selects all the rows with the max value in each group.

In [367]: df
Out[367]: 
    sp  mt  val  count
0  MM1  S1    a      3
1  MM1  S1    n      2
2  MM1  S3   cb      5
3  MM2  S3   mk      8
4  MM2  S4   bg     10
5  MM2  S4  dgb      1
6  MM4  S2   rd      2
7  MM4  S2   cb      2
8  MM4  S2  uyi      7


# Apply idxmax() and use .loc on the dataframe to select the rows with max values:
In [368]: df.loc[df.groupby(["sp", "mt"])["count"].idxmax()]
Out[368]: 
    sp  mt  val  count
0  MM1  S1    a      3
2  MM1  S3   cb      5
3  MM2  S3   mk      8
4  MM2  S4   bg     10
8  MM4  S2  uyi      7


# Just to show what values are returned by .idxmax() above:
In [369]: df.groupby(["sp", "mt"])["count"].idxmax().values
Out[369]: array([0, 2, 3, 4, 8])
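
As the comments below point out, idxmax returns only the first occurrence of the max, so tied maxima are dropped. A minimal sketch using the tied MM4/S2 group from Example 2 in the question:

import pandas as pd

df_tie = pd.DataFrame({'sp': ['MM4', 'MM4'], 'mt': ['S2', 'S2'],
                       'val': ['cb', 'uyi'], 'count': [8, 8]})

# Only the first tied row (index 0, 'cb') is returned
print(df_tie.loc[df_tie.groupby(['sp', 'mt'])['count'].idxmax()])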
Surya
  • The questioner here specified "I want to get ALL the rows where count equals max in each group", while `idxmax` "return[s] index of first occurrence of maximum over requested axis" according to the docs (0.21). – Max Power Dec 19 '17 at 11:55
  • This is a great solution, but for a different problem – Carlos Souza Oct 27 '19 at 18:40
77

You may not need groupby(); instead, use sort_values + drop_duplicates:

df.sort_values('count').drop_duplicates(['Sp', 'Mt'], keep='last')
Out[190]: 
    Sp  Mt Value  count
0  MM1  S1     a      3
2  MM1  S3    cb      5
8  MM4  S2   uyi      7
3  MM2  S3    mk      8
4  MM2  S4    bg     10

Almost the same logic, using tail:

df.sort_values('count').groupby(['Sp', 'Mt']).tail(1)
Out[52]: 
    Sp  Mt Value  count
0  MM1  S1     a      3
2  MM1  S3    cb      5
8  MM4  S2   uyi      7
3  MM2  S3    mk      8
4  MM2  S4    bg     10
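
If you want the result back in the original row order, appending .sort_index() should do it (a small addition, not part of the original answer):

# Same rows, restored to the original index order
df.sort_values('count').groupby(['Sp', 'Mt']).tail(1).sort_index()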
BENY
41

Having tried the solution suggested by Zelazny on a relatively large DataFrame (~400k rows), I found it to be very slow. Here is an alternative that I found to run orders of magnitude faster on my data set.

df = pd.DataFrame({
    'sp' : ['MM1', 'MM1', 'MM1', 'MM2', 'MM2', 'MM2', 'MM4', 'MM4', 'MM4'],
    'mt' : ['S1', 'S1', 'S3', 'S3', 'S4', 'S4', 'S2', 'S2', 'S2'],
    'val' : ['a', 'n', 'cb', 'mk', 'bg', 'dgb', 'rd', 'cb', 'uyi'],
    'count' : [3,2,5,8,10,1,2,2,7]
    })

# Compute the max count for each (sp, mt) group
df_grouped = df.groupby(['sp', 'mt']).agg({'count': 'max'})

# Move the group keys back into ordinary columns
df_grouped = df_grouped.reset_index()

# Rename so the merged column doesn't collide with 'count'
df_grouped = df_grouped.rename(columns={'count': 'count_max'})

# Attach each row's group max, then keep only the rows that reach it
df = pd.merge(df, df_grouped, how='left', on=['sp', 'mt'])

df = df[df['count'] == df['count_max']]
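
The same merge-then-filter logic can also be written as a single chain; a sketch, assuming the df defined above:

# One-chain variant: attach the group max, then filter on it
df_max = (df.merge(df.groupby(['sp', 'mt'], as_index=False)['count'].max()
                     .rename(columns={'count': 'count_max'}),
                   on=['sp', 'mt'], how='left')
            .loc[lambda d: d['count'] == d['count_max']])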
landewednack
  • Indeed this is much faster. transform seems to be slow for large datasets. – goh Jul 11 '14 at 06:30
  • Can you add comments to explain what each line does? – tommy.carstensen Mar 26 '17 at 00:47
  • FWIW: I found the more elegant-looking solution from @Zelazny7 took a long time to execute for my set of ~100K rows, but this one ran pretty quickly. (I'm running a now way-obsolete 0.13.0, which might account for the slowness.) – Roland May 04 '17 at 21:25
  • But doing this `df[df['count'] == df['count_max']]` will lose NaN rows, as will the answers above. – Qy Zuo Jul 20 '17 at 07:38
  • I highly suggest using this approach; for bigger data frames it is much faster to use .agg() than .apply(). – Gerard Sep 18 '18 at 05:37
  • I am using pandas version 0.25.1 and timed everything in Jupyter (extended the lists with `*1000000`). I found the @Zelazny7 solution to be faster. Did I miss something in the test, or is it due to the new pandas version? – Eulenfuchswiesel Sep 30 '19 at 11:38
17

Use groupby and idxmax methods:

  1. Convert the date column to datetime:

    df['date'] = pd.to_datetime(df['date'])

  2. Get the index of the max of the date column, after grouping by ad_id:

    idx = df.groupby(by='ad_id')['date'].idxmax()

  3. Get the wanted data:

    df_max = df.loc[idx]

   ad_id  price       date
7     22      2 2018-06-11
6     23      2 2018-06-22
2     24      2 2018-06-30
3     28      5 2018-06-22
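
The answer doesn't show its input frame; here is a self-contained sketch with made-up rows that illustrates the same steps:

import pandas as pd

# Hypothetical data invented for illustration
df = pd.DataFrame({'ad_id': [22, 22, 23, 24, 28],
                   'price': [1, 2, 2, 2, 5],
                   'date': ['2018-06-01', '2018-06-11', '2018-06-22',
                            '2018-06-30', '2018-06-22']})

df['date'] = pd.to_datetime(df['date'])          # step 1
idx = df.groupby(by='ad_id')['date'].idxmax()    # step 2
print(df.loc[idx])                               # step 3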
blueear
  • `date` column??? This seems like the answer to a different question. Otherwise, it's a duplicate of [Surya's answer](/a/44960833/4518341) and it has the same problem: in case of a tie, only the first occurrence is kept. – wjandrea Feb 18 '23 at 21:41
14

For me, the easiest solution is to keep the rows where count equals the group maximum. The following one-line command is enough:

df[df['count'] == df.groupby(['Sp', 'Mt'])['count'].transform('max')]
PAC
  • This is the same solution as [Zelazny7's answer](/a/15705958/4518341). Please don't post duplicate answers. – wjandrea Feb 18 '23 at 21:44
9

To summarize: there are many ways, but which one is faster?

import pandas as pd
import numpy as np
import time

df = pd.DataFrame(np.random.randint(1,10,size=(1000000, 2)), columns=list('AB'))

start_time = time.time()
df1idx = df.groupby(['A'])['B'].transform(max) == df['B']
df1 = df[df1idx]
print("---1 ) %s seconds ---" % (time.time() - start_time))

start_time = time.time()
df2 = df.sort_values('B').groupby(['A']).tail(1)
print("---2 ) %s seconds ---" % (time.time() - start_time))

start_time = time.time()
df3 = df.sort_values('B').drop_duplicates(['A'],keep='last')
print("---3 ) %s seconds ---" % (time.time() - start_time))

start_time = time.time()
df3b = df.sort_values('B', ascending=False).drop_duplicates(['A'])
print("---3b) %s seconds ---" % (time.time() - start_time))

start_time = time.time()
df4 = df[df['B'] == df.groupby(['A'])['B'].transform(max)]
print("---4 ) %s seconds ---" % (time.time() - start_time))

start_time = time.time()
d = df.groupby('A')['B'].nlargest(1)
df5 = df.iloc[[i[1] for i in d.index], :]
print("---5 ) %s seconds ---" % (time.time() - start_time))

And the winner is...

  • ---1 ) 0.03337574005126953 seconds ---
  • ---2 ) 0.1346898078918457 seconds ---
  • ---3 ) 0.10243558883666992 seconds ---
  • ---3b) 0.1004343032836914 seconds ---
  • ---4 ) 0.028397560119628906 seconds ---
  • ---5 ) 0.07552886009216309 seconds ---
Mauro Mascia
  • Great job including the timer, which is missing from all of these suggestions. There are a few more, and importantly it would also be good to run it on a larger dataset. Using 2.8 million rows with a varying amount of duplicates shows some startling figures. In particular, nlargest fails spectacularly (more than 100-fold slower) on large data. The fastest for my data was sort then drop duplicates (dropping all but the last was marginally faster than sorting descending and dropping all but the first). – Jon Aug 02 '22 at 03:05
8

Try using nlargest on the groupby object. The advantage is that it returns the rows that the nlargest item(s) were fetched from, so we can get their index.

In this case, we want n=1 for the max and keep='all' to include duplicate maxes.

Note: we slice the last (-1) element of our index, since our index in this case consists of tuples (e.g. ('MM1', 'S1', 0)).

df = pd.DataFrame({
    'Sp': ['MM1', 'MM1', 'MM1', 'MM2', 'MM2', 'MM2', 'MM4', 'MM4','MM4'],
    'Mt': ['S1', 'S1', 'S3', 'S3', 'S4', 'S4', 'S2', 'S2', 'S2'],
    'Val': ['a', 'n', 'cb', 'mk', 'bg', 'dgb', 'rd', 'cb', 'uyi'],
    'count': [3, 2, 5, 8, 10, 1, 2, 2, 7]
})

d = df.groupby(['Sp', 'Mt'])['count'].nlargest(1, keep='all')

df.loc[[i[-1] for i in d.index]]
    Sp  Mt  Val  count
0  MM1  S1    a      3
2  MM1  S3   cb      5
3  MM2  S3   mk      8
4  MM2  S4   bg     10
8  MM4  S2  uyi      7
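
To see the tuples mentioned in the note, you can inspect the index directly (a quick check; output shown as expected under the df above):

print(d.index.tolist())
# expected: [('MM1', 'S1', 0), ('MM1', 'S3', 2), ('MM2', 'S3', 3), ('MM2', 'S4', 4), ('MM4', 'S2', 8)]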
Kweweli
6

Realizing that "applying" "nlargest" to groupby object works just as fine:

Additional advantage - also can fetch top n values if required:

In [85]: import pandas as pd

In [86]: df = pd.DataFrame({
    ...: 'sp' : ['MM1', 'MM1', 'MM1', 'MM2', 'MM2', 'MM2', 'MM4', 'MM4','MM4'],
    ...: 'mt' : ['S1', 'S1', 'S3', 'S3', 'S4', 'S4', 'S2', 'S2', 'S2'],
    ...: 'val' : ['a', 'n', 'cb', 'mk', 'bg', 'dgb', 'rd', 'cb', 'uyi'],
    ...: 'count' : [3,2,5,8,10,1,2,2,7]
    ...: })

## Apply nlargest(1) to find the max val df, and nlargest(n) gives top n values for df:
In [87]: df.groupby(["sp", "mt"]).apply(lambda x: x.nlargest(1, "count")).reset_index(drop=True)
Out[87]:
   count  mt   sp  val
0      3  S1  MM1    a
1      5  S3  MM1   cb
2      8  S3  MM2   mk
3     10  S4  MM2   bg
4      7  S2  MM4  uyi
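
For example, to fetch the top 2 per group, just change n (my illustration, same pattern):

# Groups with fewer than 2 rows simply return what they have
df.groupby(["sp", "mt"]).apply(lambda x: x.nlargest(2, "count")).reset_index(drop=True)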
Surya
4

If you sort your DataFrame, that ordering will be preserved in the groupby. You can then just grab the first or last element and reset the index.

df = pd.DataFrame({
    'sp' : ['MM1', 'MM1', 'MM1', 'MM2', 'MM2', 'MM2', 'MM4', 'MM4','MM4'],
    'mt' : ['S1', 'S1', 'S3', 'S3', 'S4', 'S4', 'S2', 'S2', 'S2'],
    'val' : ['a', 'n', 'cb', 'mk', 'bg', 'dgb', 'rd', 'cb', 'uyi'],
    'count' : [3,2,5,8,10,1,2,2,7]
})

df.sort_values("count", ascending=False).groupby(["sp", "mt"]).first().reset_index()
nbertagnolli
3

I've been using this functional style for many group operations:

df = pd.DataFrame({
    'Sp': ['MM1', 'MM1', 'MM1', 'MM2', 'MM2', 'MM2', 'MM4', 'MM4', 'MM4'],
    'Mt': ['S1', 'S1', 'S3', 'S3', 'S4', 'S4', 'S2', 'S2', 'S2'],
    'Val': ['a', 'n', 'cb', 'mk', 'bg', 'dgb', 'rd', 'cb', 'uyi'],
    'Count': [3, 2, 5, 8, 10, 1, 2, 2, 7]
})

(df.groupby(['Sp', 'Mt'])
   .apply(lambda group: group[group['Count'] == group['Count'].max()])
   .reset_index(drop=True))

    Sp  Mt  Val  Count
0  MM1  S1    a      3
1  MM1  S3   cb      5
2  MM2  S3   mk      8
3  MM2  S4   bg     10
4  MM4  S2  uyi      7

.reset_index(drop=True) discards the group index that apply creates and gives the result a fresh default index.

joh-mue
  • Instead of `reset_index`, you could consider `.droplevel([0])`, with `.groupby(..., as_index=False)` – wjandrea Feb 19 '23 at 02:02
3

Many of these are great answers, but to help show scalability: on 2.8 million rows with a varying amount of duplicates, the differences are startling. The fastest for my data was sort followed by drop duplicates (dropping all but the last was marginally faster than sorting descending and dropping all but the first):

  1. Sort ascending, drop duplicates keeping last (2.22 s)
  2. Sort descending, drop duplicates keeping first (2.32 s)
  3. Transform max within the loc function (3.73 s)
  4. Transform max storing idx, then using loc select as a second step (3.84 s)
  5. Groupby using tail (8.98 s)
  6. idxmax with groupby, then using loc select as a second step (95.39 s)
  7. idxmax with groupby within the loc select (95.74 s)
  8. nlargest(1), then using iloc select as a second step (> 35000 s) - did not finish after running overnight
  9. nlargest(1) within the iloc select (> 35000 s) - did not finish after running overnight

As you can see, sort is about a third faster than transform and 75% faster than groupby-tail. Everything else is up to 40x slower. In small datasets this may not matter much, but as you can see, it can significantly impact large datasets.

Jon
  • Good guides to performance for those who are using one of these methods! –  Aug 29 '22 at 15:30
2
df = pd.DataFrame({
    'sp': ['MM1', 'MM1', 'MM1', 'MM2', 'MM2', 'MM2', 'MM4', 'MM4', 'MM4'],
    'mt': ['S1', 'S1', 'S3', 'S3', 'S4', 'S4', 'S2', 'S2', 'S2'],
    'val': ['a', 'n', 'cb', 'mk', 'bg', 'dgb', 'rd', 'cb', 'uyi'],
    'count': [3, 2, 5, 8, 10, 1, 2, 2, 7]
})

df.groupby(['sp', 'mt']).apply(lambda grp: grp.nlargest(1, 'count'))
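
The result carries the group keys as extra index levels; passing group_keys=False keeps the original index instead (a small variant, not part of the original answer):

df.groupby(['sp', 'mt'], group_keys=False).apply(lambda grp: grp.nlargest(1, 'count'))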
George Liu
0

df.loc[df.groupby(['Sp', 'Mt'])['count'].idxmax()]

If the df index isn't unique, you may need to run df.reset_index(inplace=True) first.
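
Putting both steps together (a sketch, using the question's ['Sp', 'Mt'] keys):

df.reset_index(inplace=True)  # ensure a unique RangeIndex; the old index becomes a column
df.loc[df.groupby(['Sp', 'Mt'])['count'].idxmax()]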

upuil