
I have a dataframe like this:

Name_A ¦ date1 ¦ 1
Name_A ¦ date2 ¦ 0
Name_A ¦ date3 ¦ 1
Name_A ¦ date4 ¦ 1
Name_A ¦ date5 ¦ 1
Name_B ¦ date6 ¦ 1
Name_B ¦ date7 ¦ 1
Name_B ¦ date8 ¦ 0
Name_B ¦ date9 ¦ 1

And I would like to get this:

Name_A ¦ date1 ¦ 1
Name_A ¦ date2 ¦ 0
Name_A ¦ date3 ¦ 1
Name_A ¦ date4 ¦ 2
Name_A ¦ date5 ¦ 3
Name_B ¦ date6 ¦ 1
Name_B ¦ date7 ¦ 2
Name_B ¦ date8 ¦ 0
Name_B ¦ date9 ¦ 1

Basically, I want the cumulative sum of consecutive 1s. Whenever the name changes or a 0 appears, the count should restart from 0.

Any ideas/suggestions? Thanks.

user3483203

  • Can you share what you've tried so far? In addition, can you provide some data in a usable format? See [mcve]. – jpp May 19 '18 at 23:00
  • For the example in your question you could print DataFrame.head(n) in your shell then copy and paste it then format it as code. – wwii May 19 '18 at 23:37

3 Answers


Here's my own take:

In [145]: group_ids = df[2].diff().ne(0).cumsum()

In [146]: df["count"] = df[2].groupby([df[0], group_ids]).cumsum()

In [147]: df
Out[147]: 
        0      1  2  count
0  Name_A  date1  1      1
1  Name_A  date2  0      0
2  Name_A  date3  1      1
3  Name_A  date4  1      2
4  Name_A  date5  1      3
5  Name_B  date6  1      1
6  Name_B  date7  1      2
7  Name_B  date8  0      0
8  Name_B  date9  1      1

This uses the compare-cumsum-groupby pattern to find the contiguous groups: df[2].diff().ne(0) gives us True whenever a value differs from the previous one, and the cumulative sum of those flags hands out a new number whenever a new run starts.

This does mean the same group_id can span different names when a run of values crosses the boundary between them, but since we're grouping on both df[0] (the names) and group_ids, we're okay.
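
For illustration, here's what the intermediate group_ids looks like on the sample frame (a quick sketch, continuing the session above):

In [148]: group_ids.tolist()
Out[148]: [1, 2, 3, 3, 3, 3, 3, 4, 5]

Rows date3 through date7 all share group 3 even though they span two names, which is exactly why the name column is part of the groupby.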

DSM

Here is a vectorized solution requiring no explicit loops:

>>> import numpy as np
>>> import pandas as pd
>>> df = pd.DataFrame.from_dict({'name': list('AAAAABBBB'), 'bit': (1,0,1,1,1,1,1,0,1)})
>>> df
   bit name
0    1    A
1    0    A
2    1    A
3    1    A
4    1    A
5    1    B
6    1    B
7    0    B
8    1    B
>>> reset = (df['bit'] == 0) | (df['name'] != df['name'].shift(1))
>>> reset, = np.where(np.concatenate([reset, [True]]))
>>> df['count'] = np.arange(reset[-1]) + (df['bit'].values[reset[:-1]]-reset[:-1]).repeat(np.diff(reset))
>>> df
   bit name  count
0    1    A      1
1    0    A      0
2    1    A      1
3    1    A      2
4    1    A      3
5    1    B      1
6    1    B      2
7    0    B      0
8    1    B      1
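
To unpack the indexing, a quick look at the intermediates helps (continuing the session above): reset holds the start position of every run plus a sentinel at the end, so np.diff(reset) gives the run lengths, and each run gets an offset chosen so its count starts at the run's first bit value.

>>> reset
array([0, 1, 5, 7, 9])
>>> np.diff(reset)                              # run lengths
array([1, 4, 2, 2])
>>> df['bit'].values[reset[:-1]] - reset[:-1]   # per-run offsets
array([ 1, -1, -4, -7])

Repeating each offset over its run length and adding np.arange(reset[-1]) then produces the count column above.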
Paul Panzer

I rebuilt your data like this:

import pandas as pd

df = pd.DataFrame(
    {'col1': ['Name_A'] * 5 + ['Name_B'] * 4,
     'col2': ['date{}'.format(x) for x in range(1, 10)],
     'col3': [1,0,1,1,1,1,1,0,1]})

For the kind of grouping you're suggesting, I like using itertools.groupby rather than pandas' own groupby, since that way I can state the two conditions you specified (a name change and a 0 in the value column) explicitly:

from itertools import groupby

groups = []
uniquekeys = []
for k, g in groupby(df.iterrows(), 
                    lambda row: (row[1]['col1'], row[1]['col3'] == 0)):
    groups.append(list(g))
    uniquekeys.append(k)
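
For illustration (a quick sketch, assuming the df built above), printing the keys shows how the two conditions split the frame into six consecutive groups:

print(uniquekeys)
# [('Name_A', False), ('Name_A', True), ('Name_A', False),
#  ('Name_B', False), ('Name_B', True), ('Name_B', False)]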

Now that the correct groups exist, all that remains is to iterate over them and calculate the cumulative sum:

cumsum = pd.concat([pd.Series([y[1]['col3'] for y in x]).cumsum() for x in groups])

df['cumsum'] = list(cumsum)
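
The list(cumsum) cast matters here: each per-group series restarts its index at 0, so the concatenated result has duplicate index labels, and converting to a plain list sidesteps pandas' index alignment when the new column is assigned.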

Result:

    col1    col2    col3    cumsum
0   Name_A  date1   1       1
1   Name_A  date2   0       0
2   Name_A  date3   1       1
3   Name_A  date4   1       2
4   Name_A  date5   1       3
5   Name_B  date6   1       1
6   Name_B  date7   1       2
7   Name_B  date8   0       0
8   Name_B  date9   1       1

For reference, see the nice explanation of itertools.groupby here.

Ido S