
Consider the following series, `ser`:

date        id 
2000        NaN
2001        NaN 
2001        1
2002        1
2000        2
2001        2
2002        2
2001        NaN
2010        NaN
2000        1
2001        1
2002        1
2010        NaN

How can I count the values so that each consecutive run (including runs of NaN) is counted and its length returned? Thanks.

Expected output:

id     Count
NaN    2
1      2
2      3
NaN    2
1      3
NaN    1
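
For reference, both answers below operate on a DataFrame `df` with this data in its `id` column; a minimal setup sketch (the construction itself is an assumption, not part of the question):

import numpy as np
import pandas as pd

# Rebuild the example data; the answers below work with df.id
df = pd.DataFrame({
    "date": [2000, 2001, 2001, 2002, 2000, 2001, 2002, 2001, 2010, 2000, 2001, 2002, 2010],
    "id": [np.nan, np.nan, 1, 1, 2, 2, 2, np.nan, np.nan, 1, 1, 1, np.nan],
})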

2 Answers


The cumsum trick is useful here; it's a little tricky with the NaNs though, so I think you need to handle those separately:

In [11]: df.id.isnull() & df.id.shift(-1).isnull()
Out[11]:
0      True
1     False
2     False
3     False
4     False
5     False
6     False
7      True
8     False
9     False
10    False
11    False
12     True
Name: id, dtype: bool

In [12]: df.id.eq(df.id.shift(-1))
Out[12]:
0     False
1     False
2      True
3     False
4      True
5      True
6     False
7     False
8     False
9      True
10     True
11    False
12    False
Name: id, dtype: bool

In [13]: (df.id.isnull() & df.id.shift(-1).isnull()) | (df.id.eq(df.id.shift(-1)))
Out[13]:
0      True
1     False
2      True
3     False
4      True
5      True
6     False
7      True
8     False
9      True
10     True
11    False
12     True
Name: id, dtype: bool

In [14]: ((df.id.isnull() & df.id.shift(-1).isnull()) | (df.id.eq(df.id.shift(-1)))).cumsum()
Out[14]:
0     1
1     1
2     2
3     2
4     3
5     4
6     4
7     5
8     5
9     6
10    7
11    7
12    8
Name: id, dtype: int64

Now you can use this labeling in your groupby:

In [15]: g = df.groupby(((df.id.isnull() & df.id.shift(-1).isnull()) | (df.id.eq(df.id.shift(-1)))).cumsum())

In [16]: pd.DataFrame({"count": g.id.size(), "id": g.id.nth(0)})
Out[16]:
    count   id
id
1       2  NaN
2       2  1.0
3       1  2.0
4       2  2.0
5       2  NaN
6       1  1.0
7       2  1.0
8       1  NaN
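
Note these counts differ from the question's expected output: the "same as next" mask increments mid-run for runs longer than two, which is why the three consecutive 2s split into groups 3 and 4 above (see the comments below). A sketch of a run-start variant of the same cumsum trick, assuming the `df` built above, that keeps each whole run in one group:

# True at the first element of each run; consecutive NaNs count as one run
start = df.id.ne(df.id.shift()) & ~(df.id.isnull() & df.id.shift().isnull())

runs = df.id.groupby(start.cumsum())
pd.DataFrame({"id": runs.agg(lambda s: s.iloc[0]), "count": runs.size()})
# id: NaN, 1, 2, NaN, 1, NaN; count: 2, 2, 3, 2, 3, 1 -- the expected output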
  • output doesn't match the question's output. – Oli Jul 30 '19 at 07:50
  • @Oli true, but very close, and specifically the OP's output is kinda bad since it has duplicates (and NaN) in the index. With a `reset_index()` it would be the same... – Andy Hayden Jul 30 '19 at 07:56
  • Agree with Andy here: a unique index has [clear performance benefits](https://stackoverflow.com/a/16629125/9209546). Aside, it makes more sense as an "index". – jpp Jul 30 '19 at 08:21

Here is another approach, using fillna with a string sentinel so that NaN values compare equal:

import numpy as np
import pandas as pd

# Replace NaN with a string sentinel so consecutive NaNs compare equal
s = df.id.fillna('nan')
# True at the first element of each run
mask = s.ne(s.shift())

ids = s[mask].to_numpy()
# Run length: 1-based position within each run, then the max per run
counts = s.groupby(mask.cumsum()).cumcount().add(1).groupby(mask.cumsum()).max().to_numpy()

# Convert the 'nan' sentinel back to `NaN`
ids[ids == 'nan'] = np.nan
ser_out = pd.Series(counts, index=ids, name='counts')

[out]

nan    2
1.0    2
2.0    3
nan    2
1.0    3
nan    1
Name: counts, dtype: int64
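
The two chained groupbys that compute the run lengths can be shortened to a single size call; a sketch, assuming the same `mask` as above:

counts = s.groupby(mask.cumsum()).size().to_numpy()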
  • Can we replace the string `nan` back to a float `NaN` at the end? – Oli Jul 30 '19 at 08:08
  • If the output is assigned to a new variable `ser_out`, something like `ser_out.index = ser_out.index.where(ser_out.index != 'nan')`, maybe? Or better, `ids[ids == 'nan'] = np.nan` before the `Series` constructor – Chris Adams Jul 30 '19 at 08:11