
I have a column in a DataFrame with values:

[1, 1, -1, 1, -1, -1]

How can I group them like this?

[1,1] [-1] [1] [-1, -1]
Georgy
Bryan Fok
    `df = pd.DataFrame({'a': [1, 1, -1, 1, -1, -1, -1, 1, 1, 1, -1]})` is a better testcase, to make sure we catch all groups, not just length-two – smci Jan 06 '20 at 16:40

4 Answers


You can use groupby by custom Series:

df = pd.DataFrame({'a': [1, 1, -1, 1, -1, -1]})
print(df)
   a
0  1
1  1
2 -1
3  1
4 -1
5 -1

print((df.a != df.a.shift()).cumsum())
0    1
1    1
2    2
3    3
4    4
5    4
Name: a, dtype: int32
for i, g in df.groupby((df.a != df.a.shift()).cumsum()):
    print(i)
    print(g)
    print(g.a.tolist())

1
   a
0  1
1  1
[1, 1]
2
   a
2 -1
[-1]
3
   a
3  1
[1]
4
   a
4 -1
5 -1
[-1, -1]
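To sanity-check against the longer test case suggested in the question's comments (a quick sketch, not part of the original answer), the same trick catches runs of any length:

```python
import pandas as pd

# Longer test case from the comments, with runs of varying length
df = pd.DataFrame({'a': [1, 1, -1, 1, -1, -1, -1, 1, 1, 1, -1]})

# A new group id starts whenever the value changes
group_ids = (df.a != df.a.shift()).cumsum()
groups = [g.a.tolist() for _, g in df.groupby(group_ids)]
print(groups)
# [[1, 1], [-1], [1], [-1, -1, -1], [1, 1, 1], [-1]]
```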
jezrael
  • In case you want to use this solution to `.groupby()` consecutive dates with 1 hour difference, change the condition to `df['date'].diff() != pd.Timedelta('1 hour')` – Eran H. Oct 18 '18 at 16:08
  • https://github.com/pandas-dev/pandas/issues/5494 asks for the same behaviour with the `itertools.groupby()`, but it's `Contributions Welcome, No action on 6 Jul 2018` – XoXo Feb 14 '19 at 20:09
  • 4
    Instead of `==`, there's actually a vectorized `.ne()` function: `df.a.ne(df.a.shift())` – smci Jan 06 '20 at 16:42
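Eran H.'s comment above can be illustrated with a small sketch (the timestamps here are hypothetical): rows stay in the same group as long as consecutive timestamps are exactly 1 hour apart, and any larger gap starts a new group.

```python
import pandas as pd

# Hypothetical data: two runs of hourly timestamps separated by a gap
df = pd.DataFrame({'date': pd.to_datetime([
    '2023-01-01 00:00', '2023-01-01 01:00',   # one run
    '2023-01-01 05:00', '2023-01-01 06:00',   # 4-hour gap starts a new run
])})

# Same cumsum trick, but the "boundary" condition is a time difference
run_ids = (df['date'].diff() != pd.Timedelta('1 hour')).cumsum()
print(run_ids.tolist())
# [1, 1, 2, 2]
```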

Using `groupby` from `itertools`, with the data from Jez's answer:

from itertools import groupby
[list(group) for key, group in groupby(df.a.values.tolist())]
# [[1, 1], [-1], [1], [-1, -1]]
BENY
  • 2
    this answer is more explicit than the accepted `cumsum()` solution – XoXo Feb 14 '19 at 19:56
  • 1
    from the document: `The operation of groupby() is similar to the uniq filter in Unix. It generates a break or new group every time the value of the key function changes` – XoXo Feb 14 '19 at 20:00
  • 2
    While this is a literal answer to the question, it loses the oft-needed labeling of the group of consecutive values. – Rich Andrews Jul 14 '20 at 22:57
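Addressing the labeling concern in the last comment: `enumerate` can be wrapped around `itertools.groupby` to attach a running group number to each run (a small sketch, not part of the original answer):

```python
from itertools import groupby

values = [1, 1, -1, 1, -1, -1]

# enumerate() supplies a group label alongside each run of equal values
labeled = [(n, list(group)) for n, (key, group) in enumerate(groupby(values))]
print(labeled)
# [(0, [1, 1]), (1, [-1]), (2, [1]), (3, [-1, -1])]
```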

`Series.diff` is another way to mark the group boundaries (`a != a.shift()` means `a.diff() != 0`):

consecutives = df['a'].diff().ne(0).cumsum()

# 0    1
# 1    1
# 2    2
# 3    3
# 4    4
# 5    4
# Name: a, dtype: int64

And to turn these groups into a Series of lists (see the other answers for a list of lists), aggregate with groupby.agg or groupby.apply:

df['a'].groupby(consecutives).agg(list)

# a
# 1      [1, 1]
# 2        [-1]
# 3         [1]
# 4    [-1, -1]
# Name: a, dtype: object
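If a plain list of lists is preferred instead, a `.tolist()` on the aggregated Series drops the group labels (a small follow-up sketch using the same `df` and `consecutives` as above):

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 1, -1, 1, -1, -1]})
consecutives = df['a'].diff().ne(0).cumsum()

# Aggregate each run into a list, then discard the group index
runs = df['a'].groupby(consecutives).agg(list).tolist()
print(runs)
# [[1, 1], [-1], [1], [-1, -1]]
```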
tdy

If you are dealing with string values:

import itertools

s = pd.DataFrame(['A','A','A','BB','BB','CC','A','A','BB'], columns=['a'])
string_groups = sum([['%s_%s' % (i, n) for i in g]
                     for n, (k, g) in enumerate(itertools.groupby(s.a))], [])

>>> string_groups 
['A_0', 'A_0', 'A_0', 'BB_1', 'BB_1', 'CC_2', 'A_3', 'A_3', 'BB_4']

grouped = s.groupby(string_groups, sort=False).agg(list)
grouped.index = grouped.index.str.split('_').str[0]

>>> grouped
            a
A   [A, A, A]
BB   [BB, BB]
CC       [CC]
A      [A, A]
BB       [BB]

As a separate function:

def groupby_consec(df, col):
    string_groups = sum([['%s_%s' % (i, n) for i in g]
                         for n, (k, g) in enumerate(itertools.groupby(df[col]))], [])
    return df.groupby(string_groups, sort=False)
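A quick usage sketch of the helper above (the sample frame here is a shortened, hypothetical variant of the `s` defined earlier):

```python
import itertools
import pandas as pd

def groupby_consec(df, col):
    # Label each row with "<value>_<run number>" so equal values in
    # different runs land in different groups
    string_groups = sum([['%s_%s' % (i, n) for i in g]
                         for n, (k, g) in enumerate(itertools.groupby(df[col]))], [])
    return df.groupby(string_groups, sort=False)

s = pd.DataFrame(['A', 'A', 'A', 'BB', 'BB', 'CC'], columns=['a'])
result = groupby_consec(s, 'a').agg(list)
print(result)
```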
hellpanderr