How do I access the corresponding groupby dataframe in a groupby object by the key?

With the following groupby:

rand = np.random.RandomState(1)
df = pd.DataFrame({'A': ['foo', 'bar'] * 3,
                   'B': rand.randn(6),
                   'C': rand.randint(0, 20, 6)})
gb = df.groupby(['A'])

I can iterate through it to get the keys and groups:

In [11]: for k, gp in gb:
             print('key=' + str(k))
             print(gp)
key=bar
     A         B   C
1  bar -0.611756  18
3  bar -1.072969  10
5  bar -2.301539  18
key=foo
     A         B   C
0  foo  1.624345   5
2  foo -0.528172  11
4  foo  0.865408  14

I would like to be able to access a group by its key:

In [12]: gb['foo']
Out[12]:  
     A         B   C
0  foo  1.624345   5
2  foo -0.528172  11
4  foo  0.865408  14

But when I try doing that with gb[('foo',)] I get this weird pandas.core.groupby.DataFrameGroupBy object thing which doesn't seem to have any methods that correspond to the DataFrame I want.

The best I could think of is:

In [13]: def gb_df_key(gb, key, orig_df):
             ix = gb.indices[key]   # positional row indices for this group
             return orig_df.iloc[ix]

         gb_df_key(gb, 'foo', df)
Out[13]:
     A         B   C
0  foo  1.624345   5
2  foo -0.528172  11
4  foo  0.865408  14

but this is kind of nasty, considering how nice pandas usually is at these things.
What's the built-in way of doing this?

smci
beardc

6 Answers


You can use the get_group method:

In [21]: gb.get_group('foo')
Out[21]: 
     A         B   C
0  foo  1.624345   5
2  foo -0.528172  11
4  foo  0.865408  14

Note: This doesn't require creating an intermediate dictionary or a copy of every sub-DataFrame for every group, so it is much more memory-efficient than building the naive dictionary with dict(iter(gb)). It uses data structures already available in the groupby object.


You can select different columns using the groupby slicing:

In [22]: gb[["A", "B"]].get_group("foo")
Out[22]:
     A         B
0  foo  1.624345
2  foo -0.528172
4  foo  0.865408

In [23]: gb["C"].get_group("foo")
Out[23]:
0     5
2    11
4    14
Name: C, dtype: int64
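As an aside, the `gb[('foo',)]` attempt in the question is actually column selection, which is why it returns another DataFrameGroupBy. A tuple key only comes into play with `get_group` when you group by several columns. A small sketch re-using the question's data (the second grouping by `['A', 'C']` is my own illustration):

```python
import numpy as np
import pandas as pd

rand = np.random.RandomState(1)
df = pd.DataFrame({'A': ['foo', 'bar'] * 3,
                   'B': rand.randn(6),
                   'C': rand.randint(0, 20, 6)})

# With more than one key column the group labels are tuples,
# so get_group expects a tuple as well.
gb2 = df.groupby(['A', 'C'])
print(gb2.get_group(('bar', 18)))
```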
Paul
Andy Hayden

Wes McKinney (pandas' author) in Python for Data Analysis provides the following recipe:

groups = dict(list(gb))

which returns a dictionary whose keys are your group labels and whose values are DataFrames, i.e.

groups['foo']

will yield what you are looking for:

     A         B   C
0  foo  1.624345   5
2  foo -0.528172  11
4  foo  0.865408  14
Andy Hayden
JD Margulici
  • Thank you, this is very useful. How can I modify the code to make `groups = dict(list(gb))` only store column `C`? Let's say I am not interested in the other columns and therefore do not want to store them. – Zhubarb Jan 14 '14 at 13:39
  • Answer: `dict(list( df.groupby(['A'])['C'] ))` – Zhubarb Jan 15 '14 at 13:27
  • Note: it's more efficient (but equivalent) to use `dict(iter(gb))`. (Although `get_group` is the best way, as it doesn't involve creating a dictionary and keeps you in pandas!) – Andy Hayden Mar 10 '14 at 22:54
  • I wasn't able to use `dict(list(gb))`, but you can create a dictionary the following way: `gb_dict = {str(indx): str(val) for indx in gb.indx for val in gb.some_key}` and then retrieve a value via `gb_dict[some_key]` – user2476665 Mar 18 '16 at 00:20
  • Just use [`get_group()`](https://pandas.pydata.org/pandas-docs/stable/whatsnew/v0.14.0.html?highlight=get_group); this recipe has not been needed for years. – smci Nov 12 '19 at 23:58
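The single-column variant suggested in the comments, as a runnable sketch re-using the question's data:

```python
import numpy as np
import pandas as pd

rand = np.random.RandomState(1)
df = pd.DataFrame({'A': ['foo', 'bar'] * 3,
                   'B': rand.randn(6),
                   'C': rand.randint(0, 20, 6)})

# dict(iter(...)) materialises each group once; restricting the
# groupby to column C first keeps only that column in the values.
groups_c = dict(iter(df.groupby('A')['C']))
print(groups_c['foo'])
```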

Rather than

gb.get_group('foo')

I prefer using gb.groups

df.loc[gb.groups['foo']]

This way you can also choose multiple columns. For example:

df.loc[gb.groups['foo'],('A','B')]
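A sketch of how `gb.groups` also lets you pull several groups in a single lookup, re-using the question's data (combining the keys via an index union is my own illustration):

```python
import numpy as np
import pandas as pd

rand = np.random.RandomState(1)
df = pd.DataFrame({'A': ['foo', 'bar'] * 3,
                   'B': rand.randn(6),
                   'C': rand.randint(0, 20, 6)})
gb = df.groupby('A')

# gb.groups maps each key to the index labels of its rows, so the
# label sets can be combined and fed to one .loc call.
idx = gb.groups['foo'].union(gb.groups['bar'])
print(df.loc[idx, ['A', 'B']])
```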
LegitMe
gb = df.groupby(['A'])

gb_groups = gb.groups

If you are looking for specific groups, list the available keys with gb_groups.keys() and put the ones you want into key_list:

key_list = [key1, key2, key3, ...]

for key, values in gb_groups.items():
    if key in key_list:
        print(df.loc[values], "\n")
Jongwook Choi
Surya

I was looking for a way to sample a few members of the GroupBy obj - had to address the posted question to get this done.

create groupby object based on some_key column

grouped = df.groupby('some_key')

pick N dataframes and grab their indices

sampled_df_i = random.sample(list(grouped.indices), N)  # needs `import random`; sample() requires a sequence, not a dict

grab the groups

df_list = [grouped.get_group(df_i) for df_i in sampled_df_i]

optionally - turn it all back into a single dataframe object

sampled_df = pd.concat(df_list, axis=0, join='outer')
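Put together with the question's data, the recipe above might look like this (N, the seed, and the generator passed to `pd.concat` are illustrative choices):

```python
import random

import numpy as np
import pandas as pd

rand = np.random.RandomState(1)
df = pd.DataFrame({'A': ['foo', 'bar'] * 3,
                   'B': rand.randn(6),
                   'C': rand.randint(0, 20, 6)})
grouped = df.groupby('A')

# random.sample needs a sequence, so convert the dict of group
# indices to a list of keys before sampling N whole groups.
N = 1
random.seed(0)
sampled_keys = random.sample(list(grouped.indices), N)
sampled_df = pd.concat(grouped.get_group(k) for k in sampled_keys)
print(sampled_df)
```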
Ste
meyerson
df.groupby('A').get_group('foo')

is equivalent to:

df[df['A'] == 'foo']
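A quick sketch checking that equivalence on the question's data; the practical difference is that a reused groupby object indexes all groups once, which amortises the cost if you need many keys:

```python
import numpy as np
import pandas as pd

rand = np.random.RandomState(1)
df = pd.DataFrame({'A': ['foo', 'bar'] * 3,
                   'B': rand.randn(6),
                   'C': rand.randint(0, 20, 6)})

# Both expressions select the same rows with the same index.
a = df.groupby('A').get_group('foo')
b = df[df['A'] == 'foo']
print(a.equals(b))
```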
Mykola Zotko