
What would be the most efficient way to use groupby and at the same time apply a filter in pandas?

Basically I am asking for the equivalent in SQL of

select *
...
group by col_name
having condition

I think there are many use cases, ranging from conditional means and sums to conditional probabilities, which would make such a command very powerful.

I need very good performance, so ideally such a command would not be the result of several layered operations done in Python.
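For concreteness, here is the kind of multi-step workaround I am hoping to replace with a single command (column names are made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({'col_name': ['a', 'a', 'b'], 'value': [1, 2, 3]})

# Equivalent of: SELECT * FROM df GROUP BY col_name HAVING COUNT(*) > 1
# done as layered operations:
sizes = df.groupby('col_name')['value'].transform('size')
result = df[sizes > 1]  # keep rows whose group has more than one row
```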

Mannaggia
    @AndyHayden has written a [nice example](http://stackoverflow.com/a/18357933/190597) of using `groupby-filter`. I think the `filter` is the pandas equivalent of the `having condition`. – unutbu Feb 28 '14 at 20:55

2 Answers


As mentioned in unutbu's comment, groupby's filter is the equivalent of SQL's HAVING:

In [11]: df = pd.DataFrame([[1, 2], [1, 3], [5, 6]], columns=['A', 'B'])

In [12]: df
Out[12]:
   A  B
0  1  2
1  1  3
2  5  6

In [13]: g = df.groupby('A')  #  GROUP BY A

In [14]: g.filter(lambda x: len(x) > 1)  #  HAVING COUNT(*) > 1
Out[14]:
   A  B
0  1  2
1  1  3

You can write more complicated functions (these are applied to each group), provided they return a plain ol' bool:

In [15]: g.filter(lambda x: x['B'].sum() == 5)
Out[15]:
   A  B
0  1  2
1  1  3

Note: potentially there is a bug where you can't write your function to act on the columns you've used to groupby... a workaround is to group by the columns manually, i.e. g = df.groupby(df['A']).
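Since the question stresses performance: when the HAVING condition can be expressed as a built-in aggregate, boolean indexing on `groupby().transform(...)` is often faster than `filter` with a Python lambda, because it avoids calling a Python function once per group. A sketch using the same df as above:

```python
import pandas as pd

df = pd.DataFrame([[1, 2], [1, 3], [5, 6]], columns=['A', 'B'])

# HAVING SUM(B) == 5, without a per-group Python callback:
group_sums = df.groupby('A')['B'].transform('sum')
result = df[group_sums == 5]  # rows of group A=1, whose B values sum to 5
```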

Andy Hayden
    maybe add to the Comparison with SQL section? (this example) – Jeff Feb 28 '14 at 22:59
    @Jeff good call, looking at it, I wonder if should think about example of using agg *and* filter at the same time (I don't think there is a way to do that easily without doing a second groupby...) that could be this question :s – Andy Hayden Feb 28 '14 at 23:20
    just chain em (but you do need a second groupby): ``DataFrame([[1, 2], [1, 3], [2, 5], [2, 8], [5, 6]], columns=['A', 'B']).groupby('A').filter(lambda x: len(x)>1).groupby('A').sum()`` – Jeff Feb 28 '14 at 23:27
  • For more complicated cases I had to use `.apply()` https://stackoverflow.com/questions/23394476/keep-other-columns-when-using-min-with-groupby/49476152#49476152 – citynorman Mar 25 '18 at 13:09

I group by state and county, take the max of field1 per group, and test whether it is greater than 20; then I use boolean indexing to keep only the True entries:

counties = df.groupby(['state', 'county'])['field1'].max() > 20
counties = counties[counties]
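A self-contained sketch of the above, with made-up data (column names as in the snippet):

```python
import pandas as pd

df = pd.DataFrame({
    'state': ['TX', 'TX', 'CA'],
    'county': ['Travis', 'Travis', 'Kern'],
    'field1': [25, 10, 15],
})

# Max of field1 per (state, county) group, then keep groups exceeding 20.
max_per_group = df.groupby(['state', 'county'])['field1'].max()
counties = max_per_group[max_per_group > 20]  # boolean indexing; no == True needed
```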
Golden Lion