
I have a DataFrame with many missing values in columns which I wish to groupby:

import pandas as pd
import numpy as np
df = pd.DataFrame({'a': ['1', '2', '3'], 'b': ['4', np.NaN, '6']})

In [4]: df.groupby('b').groups
Out[4]: {'4': [0], '6': [2]}

You can see that Pandas has dropped the rows with NaN target values. (I want to include these rows!)

Since I need many such operations (many columns have missing values) and use more complicated functions than just medians (typically random forests), I want to avoid writing overly complicated code.

Any suggestions? Should I write a function for this or is there a simple solution?

– Gyula Sámuel Karli
  • @PhillipCloud I've edited this question to include just the question, which is actually quite good, relating to [open pandas enhancement](https://github.com/pydata/pandas/issues/3729) of Jeff's. – Andy Hayden Aug 25 '13 at 17:13
  • Not being able to include (and propagate) NaNs in groups is quite aggravating. Citing R is not convincing, as this behavior is not consistent with a lot of other things. Anyway, the dummy hack is also pretty bad. However, the size (includes NaNs) and the count (ignores NaNs) of a group will differ if there are NaNs. `dfgrouped = df.groupby(['b']).a.agg(['sum','size','count']); dfgrouped['sum'][dfgrouped['size']!=dfgrouped['count']] = None` –  May 05 '17 at 16:57
  • Can you summarize what you are specifically trying to achieve? i.e. we see an output, but what is the "desired" output? – c-a Aug 12 '17 at 18:24
  • With pandas 1.1 you will soon be able to specify `dropna=False` in `groupby()` to get your desired result. [More info](https://pandas.pydata.org/pandas-docs/dev/whatsnew/v1.1.0.html#allow-na-in-groupby-key) – cs95 May 20 '20 at 21:18
  • Note that as of this writing, there is a bug that makes `dropna=False` fail with MultiIndex grouping. There are a handful of open issues mentioning this on their github and not a lot of momentum on fixing it yet unfortunately. – totalhack Apr 11 '21 at 22:09
  • More detail regarding the MultiIndex bug mentioned by @totalhack: still not yet fixed as of pandas 1.5. Closest issue is this one, which was closed as a duplicate of some more obscure issues, which are in progress but still open: https://github.com/pandas-dev/pandas/issues/36470 – Neil Traft Jan 03 '23 at 22:43

7 Answers


pandas >= 1.1

From pandas 1.1 you have better control over this behavior; NA values are now allowed in the grouper using `dropna=False`:

pd.__version__
# '1.1.0.dev0+2004.g8d10bfb6f'

# Example from the docs
df

   a    b  c
0  1  2.0  3
1  1  NaN  4
2  2  1.0  3
3  1  2.0  2

# without NA (the default)
df.groupby('b').sum()

     a  c
b        
1.0  2  3
2.0  2  5
# with NA
df.groupby('b', dropna=False).sum()

     a  c
b        
1.0  2  3
2.0  2  5
NaN  1  4
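
Applied to the frame from the question, the same flag keeps the NaN key in `.groups` as well (a quick sketch; the exact repr of `.groups` varies a little between pandas versions):

df = pd.DataFrame({'a': ['1', '2', '3'], 'b': ['4', np.nan, '6']})
df.groupby('b', dropna=False).groups
# roughly: {'4': [0], '6': [2], nan: [1]}
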
– cs95
  • Hopefully this answer makes a gradual march up to the top. It's the correct approach. – kdbanman Jun 01 '20 at 18:53
  • It doesn't work for me. `kroscek_jupyter_metabase = fromdb_1474_detail.groupby(groupby, dropna = False)[col_to_count].count()` returns `TypeError: groupby() got an unexpected keyword argument 'dropna'` – Cignitor Jan 23 '21 at 07:11
  • @Cignitor please run print(pd.__version__) and let me know what it says. – cs95 Jan 23 '21 at 07:23
  • This fails with MultiIndex grouping unfortunately. The most straightforward workaround I've seen so far, albeit ugly, appears to be replacing the NaN value before grouping. – totalhack Apr 11 '21 at 22:21
  • I would like None to be in the same group! – Ievgen Jun 03 '21 at 20:21
  • Logically they aren't the same thing though. @IevgenNaida ideally you should have only one way of representing missing data. Anything else is confusing, brittle, and error-prone. – cs95 Jun 05 '21 at 18:06
  • @cs95 Sorry, my comment is misleading. I just need my None to be grouped, and dropna=False is perfectly doing the job. – Ievgen Jun 06 '21 at 21:23

This is mentioned in the Missing Data section of the docs:

NA groups in GroupBy are automatically excluded. This behavior is consistent with R

One workaround is to use a placeholder before doing the groupby (e.g. -1):

In [11]: df.fillna(-1)
Out[11]: 
   a   b
0  1   4
1  2  -1
2  3   6

In [12]: df.fillna(-1).groupby('b').sum()
Out[12]: 
    a
b    
-1  2
4   1
6   3

That said, this feels like a pretty awful hack... perhaps there should be an option to include NaN in groupby (see this github issue - which uses the same placeholder hack).
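
If you also want NaN back as a key in the result rather than the placeholder, one option is to rename it after aggregating (a minimal sketch with a numeric 'a' for a clean sum, assuming -1 never occurs as a real value in 'b'):

import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3], 'b': ['4', np.nan, '6']})
out = df.fillna({'b': -1}).groupby('b').sum()  # fill only the grouping column
out = out.rename(index={-1: np.nan})           # put NaN back into the result index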

However, as described in another answer, "from pandas 1.1 you have better control over this behavior, NA values are now allowed in the grouper using dropna=False"

– Andy Hayden
  • This is a logical but sort of funny solution that I've thought of earlier: Pandas makes NaN fields from the empty ones, and we have to change them back. This is the reason I'm thinking of looking for other solutions, like running an SQL server and querying the tables from there (looks a bit too complicated), or looking at another library instead of Pandas, or using my own (which I want to get rid of). Thx – Gyula Sámuel Karli Aug 26 '13 at 20:52
  • @GyulaSámuelKarli To me this seems a small bug (see the bugreport above), and my solution is a workaround. I find it strange you write off the entire library. – Andy Hayden Aug 26 '13 at 21:02
  • I don't want to write off Pandas, just to look for the tool that fits my requirements the most. – Gyula Sámuel Karli Aug 26 '13 at 21:08
  • But what if you do not want to replace the NaNs with different values? Is there no way to use the `sum` method incorporating NaNs? (for instance, like the dataframe sum method `df.sum(skipna=True)`): http://pandas.pydata.org/pandas-docs/version/0.17.1/generated/pandas.DataFrame.sum.html – Guido Jan 12 '16 at 10:08
  • @Guido This question is about the groupby key being NaN, so I'm not sure I follow the question. – Andy Hayden Jan 12 '16 at 17:40
  • You are right. I was looking at the target values, which can be NaN too. If they are NaN, they cannot be summed using the proposed method. But I guess that is out of scope for this thread. – Guido Jan 13 '16 at 11:56
  • Also, do not forget to reassign the df back to another/same variable. `new_df = df.replace(np.nan, -1).groupby('b').sum()` – van_d39 Nov 30 '16 at 19:12
  • Have a look at my answer below, I believe I have found a pretty good (cleaner, and probably faster) solution. http://stackoverflow.com/a/43375020/408853 – c-a Apr 12 '17 at 16:33
  • The best answer is in the comment on @Tuetschek's answer. – Yuval Atzmon May 18 '17 at 07:50
  • @yuval thanks, I updated that part, it's interesting that .replace(np.nan, x) behavior would have changed! – Andy Hayden May 18 '17 at 13:39
  • @AndyHayden Thank you so much for saving me the rest of my day, this issue was driving me crazy. – gaborous Dec 29 '18 at 01:51
  • No, this is not consistent with R. `df %>% group_by` will give NA summaries too with a warning, which can be avoided by passing the grouping column through `fct_explicit_na`; then a (Missing) level is created. – Ravaging Care Aug 16 '19 at 13:33
  • Agree with Ravaging Care. This is not consistent with R. Data manipulation packages like dplyr & data.table in R default to include NA's when grouping. Excluding NA's when grouping, like Pandas does, seems counter-intuitive to me: even in SQL NULLs are included when grouping. – Fierr Oct 22 '19 at 19:36
  • "One workaround is to use a placeholder before doing the groupby (e.g. -1):". It turns out that -1 is treated like an NA value, at least in pandas version 1.4.2. – Paul Rougieux May 10 '23 at 15:57

Ancient topic, but if someone still stumbles over this: another workaround is to convert to string via .astype(str) before grouping. That will preserve the NaN's.

df = pd.DataFrame({'a': ['1', '2', '3'], 'b': ['4', np.NaN, '6']})
df['b'] = df['b'].astype(str)
df.groupby(['b']).sum()
    a
b   
4   1
6   3
nan 2
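
If you would rather end up with a real NaN than the literal string 'nan' in the result's index, the label can be mapped back once the grouping is done (a small sketch continuing the example above):

out = df.groupby(['b']).sum()
out.index = out.index.where(out.index != 'nan', np.nan)  # 'nan' label -> real NaN again
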
– M. Kiewisch
  • @K3---rnc: See the comment to your link - the author of the post in your link did something wrong. – Thomas Jun 06 '18 at 16:31
  • @Thomas, yes, exactly as in the example above. Please edit if you can make the example safe (and as trivial). – K3---rnc Jun 06 '18 at 22:21
  • The `sum` of `a` is string concatenation here, not a numeric sum. This only "works" because 'b' consisted of distinct entries. You need 'a' to be numeric and 'b' to be string – BallpointBen Feb 27 '19 at 16:38
  • Note the column `a` is an object and the resultant mean after `groupby` may not be the thing you want! – Jia Gao Feb 27 '22 at 09:09

I am not able to add a comment to M. Kiewisch since I do not have enough reputation points (only have 41 but need more than 50 to comment).

Anyway, I just want to point out that M. Kiewisch's solution does not work as is and may need more tweaking. Consider for example

>>> df = pd.DataFrame({'a': [1, 2, 3, 5], 'b': [4, np.NaN, 6, 4]})
>>> df
   a    b
0  1  4.0
1  2  NaN
2  3  6.0
3  5  4.0
>>> df.groupby(['b']).sum()
     a
b
4.0  6
6.0  3
>>> df.astype(str).groupby(['b']).sum()
      a
b
4.0  15
6.0   3
nan   2

which shows that for group b=4.0, the corresponding value is 15 instead of 6. Here it is just concatenating 1 and 5 as strings instead of adding them as numbers.
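
A sketch of the fix (assuming a numeric sum is what you want): stringify only the grouping column so that 'a' stays numeric:

>>> df = pd.DataFrame({'a': [1, 2, 3, 5], 'b': [4, np.NaN, 6, 4]})
>>> df['b'] = df['b'].astype(str)
>>> df.groupby(['b']).sum()
       a
b
4.0    6
6.0    3
nan    2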

– Kamaraju Kusumanchi
  • That's because you converted the entire DF to str, instead of just the `b` column – Korem Aug 04 '17 at 08:31
  • Note that this has been fixed in the mentioned answer now. – Shaido Aug 21 '19 at 02:02
  • The new solution is better but still not safe, in my opinion. Consider a case where one of the entries in column 'b' is the same as the stringified np.NaN. Then those things are clubbed together. `df = pd.DataFrame({'a': [1, 2, 3, 5, 6], 'b': ['foo', np.NaN, 'bar', 'foo', 'nan']}); df['b'] = df['b'].astype(str); df.groupby(['b']).sum()` – Kamaraju Kusumanchi Aug 21 '19 at 15:14

All answers provided thus far result in potentially dangerous behavior as it is quite possible you select a dummy value that is actually part of the dataset. This is increasingly likely as you create groups with many attributes. Simply put, the approach doesn't always generalize well.

A less hacky solution is to use drop_duplicates() to create a unique index of value combinations, each with its own ID, and then group on that ID. It is more verbose but does get the job done:

import numpy as np

def safe_groupby(df, group_cols, agg_dict):
    # set name of group col to unique value
    group_id = 'group_id'
    while group_id in df.columns:
        group_id += 'x'
    # get final order of columns
    agg_col_order = (group_cols + list(agg_dict.keys()))
    # create unique index of grouped values
    group_idx = df[group_cols].drop_duplicates()
    group_idx[group_id] = np.arange(group_idx.shape[0])
    # merge unique index on dataframe
    df = df.merge(group_idx, on=group_cols)
    # group dataframe on group id and aggregate values
    df_agg = df.groupby(group_id, as_index=True)\
               .agg(agg_dict)
    # merge grouped value index to results of aggregation
    df_agg = group_idx.set_index(group_id).join(df_agg)
    # rename index
    df_agg.index.name = None
    # return reordered columns
    return df_agg[agg_col_order]

Note that you can now simply do the following:

from collections import OrderedDict

data_block = [np.tile([None, 'A'], 3),
              np.repeat(['B', 'C'], 3),
              [1] * (2 * 3)]

col_names = ['col_a', 'col_b', 'value']

test_df = pd.DataFrame(data_block, index=col_names).T

grouped_df = safe_groupby(test_df, ['col_a', 'col_b'],
                          OrderedDict([('value', 'sum')]))

This will return the successful result without having to worry about overwriting real data that is mistaken as a dummy value.

– Grant Langseth
  • This is the best solution for the general case, but in cases where I know of an invalid string / number I can use instead, I'm probably going to go with Andy Hayden's answer below... I hope pandas fixes this behavior soon. – Sarah Messer Apr 24 '20 at 21:51

One small point about Andy Hayden's solution – it doesn't work (anymore?) because np.nan == np.nan yields False, so the replace function doesn't actually do anything.

What worked for me was this:

df['b'] = df['b'].apply(lambda x: x if not np.isnan(x) else -1)

(At least that's the behavior for Pandas 0.19.2. Sorry to add it as a different answer, I do not have enough reputation to comment.)
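
If the column holds strings or mixed types (where np.isnan raises a TypeError), pd.isna is a more robust check; a small sketch of the same idea:

df['b'] = df['b'].apply(lambda x: x if not pd.isna(x) else -1)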

– Tuetschek

I answered this already, but for some reason the answer was converted to a comment. Nevertheless, this is the most efficient solution:

Not being able to include (and propagate) NaNs in groups is quite aggravating. Citing R is not convincing, as this behavior is not consistent with a lot of other things. Anyway, the dummy hack is also pretty bad. However, the size (includes NaNs) and the count (ignores NaNs) of a group will differ if there are NaNs.

dfgrouped = df.groupby(['b']).a.agg(['sum','size','count'])

dfgrouped.loc[dfgrouped['size'] != dfgrouped['count'], 'sum'] = None

When these differ, you can set the value back to None for the result of the aggregation function for that group.

  • This was super helpful to me but it answers a slightly different question than the original one. IIUC, your solution propagates NaNs in the summation, but the NaN items in the "b" column still get dropped as rows. – Andrew Jun 06 '19 at 15:31