Consider the following DataFrame:
import pandas as pd
import numpy as np

np.random.seed(666)
dd = pd.DataFrame({'v1': np.random.choice(range(30), 20),
                   'v2': np.random.choice(pd.date_range('5/3/2016', periods=365, freq='D'),
                                          20, replace=False)})
dd = dd.sort_values('v2')
# v1 v2
#5 4 2016-05-03
#11 14 2016-05-26
#19 12 2016-06-26
#15 8 2016-07-06
#7 27 2016-08-04
#4 9 2016-08-28
#17 5 2016-09-08
#13 16 2016-10-04
#14 14 2016-10-10
#18 18 2016-11-25
#3 6 2016-12-03
#8 19 2016-12-04
#12 1 2016-12-12
#10 28 2017-01-14
#1 2 2017-02-12
#0 12 2017-02-15
#9 28 2017-03-11
#6 29 2017-03-18
#16 7 2017-03-21
#2 13 2017-04-29
I want to create groups based on the following two conditions:
- the cumulative sum of v1 within a group must stay <= 40, or
- the time difference within v2 must stay <= 61 days.
In other words, each group is closed once it either reaches a v1 sum of 40 or spans two months of time. So if 61 days go by but the sum of 40 is not reached, close the group anyway; and if the sum of 40 is reached in, say, one day, again close the group.
In the end the flag would be:
dd['expected_flag'] = [1, 1, 1, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9]
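To show the logic I am after, here is a plain-Python sketch that reproduces expected_flag on this example (the names make_groups, sum_limit and time_limit are my own); this is exactly the loop I would like to make fast:

def make_groups(df, sum_limit=40, time_limit=pd.Timedelta(days=61)):
    # greedy pass over the rows (already sorted by v2): a row joins the
    # current group unless it would push the group's v1 sum over sum_limit
    # or its date more than time_limit past the group's first date
    flags = []
    group, running_sum, group_start = 0, 0, None
    for v1, v2 in zip(df['v1'], df['v2']):
        if (group_start is None
                or running_sum + v1 > sum_limit
                or v2 - group_start > time_limit):
            group += 1
            running_sum = 0
            group_start = v2
        running_sum += v1
        flags.append(group)
    return flags

dd['flag'] = make_groups(dd)  # matches expected_flag above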
I have asked a very, very similar question in R here, but there is a new requirement now (the date condition) that I can't quite get my head around.
NOTE: I will be running this on huge datasets, so the more efficient the better.
EDIT: I found this question, which basically takes care of the first condition but not the date condition.
EDIT 2: The 61-day time difference is merely to illustrate the time constraint; in reality that constraint will be in minutes.
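With the sketch above, that would only mean passing a different window, e.g. (the 5-minute value is purely illustrative):

dd['flag'] = make_groups(dd, sum_limit=40, time_limit=pd.Timedelta(minutes=5))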
EDIT 3: Using the function provided by @Maarten, I get the following (first 40 rows), where group 1 should also include the first 2 rows of group 2 (i.e. the two rows where v1=6).
Out[330]:
index v2 v1 max_limit group
0 2 2017-04-01 00:00:02 14 335.0 1
1 3 2017-04-01 00:00:03 8 335.0 1
2 13 2017-04-01 00:00:13 11 335.0 1
3 14 2017-04-01 00:00:14 11 335.0 1
4 29 2017-04-01 00:00:29 4 335.0 1
5 44 2017-04-01 00:00:44 16 335.0 1
6 52 2017-04-01 00:00:52 10 335.0 1
7 58 2017-04-01 00:00:58 11 335.0 1
8 65 2017-04-01 00:01:05 15 335.0 1
9 68 2017-04-01 00:01:08 8 335.0 1
10 81 2017-04-01 00:01:21 12 335.0 1
11 98 2017-04-01 00:01:38 9 335.0 1
12 102 2017-04-01 00:01:42 7 335.0 1
13 107 2017-04-01 00:01:47 12 335.0 1
14 113 2017-04-01 00:01:53 6 335.0 1
15 116 2017-04-01 00:01:56 6 335.0 1
16 121 2017-04-01 00:02:01 4 335.0 1
17 128 2017-04-01 00:02:08 16 335.0 1
18 143 2017-04-01 00:02:23 7 335.0 1
19 149 2017-04-01 00:02:29 11 335.0 1
20 163 2017-04-01 00:02:43 4 335.0 1
21 185 2017-04-01 00:03:05 9 335.0 1
22 239 2017-04-01 00:03:59 6 335.0 1
23 242 2017-04-01 00:04:02 13 335.0 1
24 272 2017-04-01 00:04:32 4 335.0 1
25 293 2017-04-01 00:04:53 8 335.0 1
26 301 2017-04-01 00:05:01 10 335.0 1
27 302 2017-04-01 00:05:02 7 335.0 1
28 305 2017-04-01 00:05:05 12 335.0 1
29 323 2017-04-01 00:05:23 5 335.0 1
30 326 2017-04-01 00:05:26 13 335.0 1
31 329 2017-04-01 00:05:29 10 335.0 1
32 365 2017-04-01 00:06:05 10 335.0 1
33 368 2017-04-01 00:06:08 11 335.0 1
34 411 2017-04-01 00:06:51 6 335.0 2
35 439 2017-04-01 00:07:19 6 335.0 2
36 440 2017-04-01 00:07:20 8 335.0 2
37 466 2017-04-01 00:07:46 7 335.0 2
38 475 2017-04-01 00:07:55 4 335.0 2
39 489 2017-04-01 00:08:09 4 335.0 2
So to make it clear, when I sum v1 and compute the time span per group I get:
dd.groupby('group', as_index=False).agg({'v1': 'sum', 'v2': lambda x: x.max() - x.min()})
Out[332]:
# group v1 v2
#0 1 320 00:06:06
#1 2 326 00:07:34
#2 3 330 00:06:53
#...
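For completeness, this is the rough check I use to spot groups that closed too early, i.e. groups whose next row would still have fitted under both limits (the names are mine; 335 is the max_limit shown above, and TIME_LIMIT is a placeholder because the window used in this run is not visible in the output):

TIME_LIMIT = pd.Timedelta(minutes=5)   # placeholder value
g = dd.groupby('group')
total = g['v1'].sum()
start = g['v2'].min()
first_next = g[['v1', 'v2']].first().shift(-1)   # first row of the *next* group
closed_early = ((total + first_next['v1'] <= 335)
                & (first_next['v2'] - start <= TIME_LIMIT))
print(closed_early)   # True where the next group's first row would still have fitted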