
Before coming here to ask, I searched a lot on the internet and in the documentation.

My problem is as follows:

I have a DataFrame like this:

                   date    dir   vel
0   2006-02-12 17:00:00 181.00  3.92
1   2006-02-12 19:00:00 17.88   5.10
2   2006-02-12 21:00:00 214.75  3.73
3   2006-02-13 00:00:00 165.53  2.16
4   2006-02-13 01:00:00 189.44  2.94
5   2006-02-13 04:00:00 152.88  2.55
6   2006-02-13 05:00:00 188.03  3.73
7   2006-02-13 06:00:00 158.50  1.37
8   2006-02-13 07:00:00 189.44  2.55
9   2006-02-13 08:00:00 152.88  1.37
10  2006-02-13 10:00:00 109.28  0.20
11  2006-02-13 11:00:00 248.50  0.98
12  2006-02-13 12:00:00 26.31   1.96
13  2006-02-13 13:00:00 19.28   6.08
14  2006-02-13 14:00:00 334.28  3.53
15  2006-02-13 15:00:00 338.50  2.75
16  2006-02-13 16:00:00 318.81  3.92
17  2006-02-13 17:00:00 323.03  3.73
18  2006-02-13 21:00:00 62.88   1.76
19  2006-02-13 22:00:00 188.03  2.94

I need to find the runs of consecutive (hourly) dates and drop any run shorter than 3 dates. So I would get the following DataFrame as a result:

                   date    dir   vel
5   2006-02-13 04:00:00 152.88  2.55
6   2006-02-13 05:00:00 188.03  3.73
7   2006-02-13 06:00:00 158.50  1.37
8   2006-02-13 07:00:00 189.44  2.55
9   2006-02-13 08:00:00 152.88  1.37  
10  2006-02-13 10:00:00 109.28  0.20
11  2006-02-13 11:00:00 248.50  0.98
12  2006-02-13 12:00:00 26.31   1.96
13  2006-02-13 13:00:00 19.28   6.08
14  2006-02-13 14:00:00 334.28  3.53
15  2006-02-13 15:00:00 338.50  2.75
16  2006-02-13 16:00:00 318.81  3.92
17  2006-02-13 17:00:00 323.03  3.73

So far I have used the following script (inspired by this answer: Find group of consecutive dates in Pandas DataFrame)

(note: the DataFrame is named estreito):

dt = estreito['date']
hour = pd.Timedelta('1H')
# a row belongs to a block if it is exactly 1 hour from either neighbour
in_block = ((dt - dt.shift(-1)).abs() == hour) | (dt.diff() == hour)

filt = estreito.loc[in_block]
# a new group starts wherever the gap to the previous row is not 1 hour
breaks = filt['date'].diff() != hour
groups = breaks.cumsum()

for _, frame in filt.groupby(groups):
    print(frame, end='\n\n')

The print output is something like this:

                 date     dir   vel
3 2006-02-13 00:00:00  165.53  2.16
4 2006-02-13 01:00:00  189.44  2.94

                 date     dir   vel
5 2006-02-13 04:00:00  152.88  2.55
6 2006-02-13 05:00:00  188.03  3.73
7 2006-02-13 06:00:00  158.50  1.37
8 2006-02-13 07:00:00  189.44  2.55
9 2006-02-13 08:00:00  152.88  1.37

                  date     dir   vel
10 2006-02-13 10:00:00  109.28  0.20
11 2006-02-13 11:00:00  248.50  0.98
12 2006-02-13 12:00:00   26.31  1.96
13 2006-02-13 13:00:00   19.28  6.08
14 2006-02-13 14:00:00  334.28  3.53
15 2006-02-13 15:00:00  338.50  2.75
16 2006-02-13 16:00:00  318.81  3.92
17 2006-02-13 17:00:00  323.03  3.73

How can I save the output in a new DataFrame, filtering out the groups with fewer than 3 consecutive dates?

Is there a different way to do this analysis? Perhaps there is an easier way to get the desired result.
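For reference, here is a minimal, self-contained sketch of the kind of thing I am after, reusing the breaks/groups idea from the script above and keeping only groups of at least 3 rows with GroupBy.filter (the toy data here is made up):

```python
import pandas as pd

# Toy data: a run of 2 hourly rows followed by a run of 3 hourly rows.
filt = pd.DataFrame({'date': pd.to_datetime([
    '2006-02-13 00:00', '2006-02-13 01:00',                      # run of 2 -> dropped
    '2006-02-13 04:00', '2006-02-13 05:00', '2006-02-13 06:00',  # run of 3 -> kept
])})

# Same grouping idea as above: a new group starts at every gap != 1 hour.
breaks = filt['date'].diff() != pd.Timedelta(hours=1)
groups = breaks.cumsum()

# Keep only the groups with at least 3 consecutive dates.
result = filt.groupby(groups).filter(lambda g: len(g) >= 3)
print(result.index.tolist())  # [2, 3, 4]
```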

Thanks in advance.

1 Answer

Using diff with cumsum, create the group key:

s = df.date.diff().dt.seconds.ne(60 * 60).cumsum()

Then use transform to count the size of each group, and slice the original df. Note that gt(3) keeps only runs longer than 3 rows; since runs with fewer than 3 dates should be dropped, ge(3) matches the requirement exactly (both give the same result on this data):

df[s.groupby(s).transform('count').ge(3)]
Out[983]: 
                  date     dir   vel
5  2006-02-13 04:00:00  152.88  2.55
6  2006-02-13 05:00:00  188.03  3.73
7  2006-02-13 06:00:00  158.50  1.37
8  2006-02-13 07:00:00  189.44  2.55
9  2006-02-13 08:00:00  152.88  1.37
10 2006-02-13 10:00:00  109.28  0.20
11 2006-02-13 11:00:00  248.50  0.98
12 2006-02-13 12:00:00   26.31  1.96
13 2006-02-13 13:00:00   19.28  6.08
14 2006-02-13 14:00:00  334.28  3.53
15 2006-02-13 15:00:00  338.50  2.75
16 2006-02-13 16:00:00  318.81  3.92
17 2006-02-13 17:00:00  323.03  3.73
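For completeness, a self-contained sketch of this approach on the sample dates (using ge(3) so that a run of exactly three hours would also survive):

```python
import pandas as pd

df = pd.DataFrame({
    "date": pd.to_datetime([
        "2006-02-12 17:00", "2006-02-12 19:00", "2006-02-12 21:00",
        "2006-02-13 00:00", "2006-02-13 01:00", "2006-02-13 04:00",
        "2006-02-13 05:00", "2006-02-13 06:00", "2006-02-13 07:00",
        "2006-02-13 08:00", "2006-02-13 10:00", "2006-02-13 11:00",
        "2006-02-13 12:00", "2006-02-13 13:00", "2006-02-13 14:00",
        "2006-02-13 15:00", "2006-02-13 16:00", "2006-02-13 17:00",
        "2006-02-13 21:00", "2006-02-13 22:00",
    ]),
})

# A new group key starts whenever the gap to the previous row is not 1 hour.
key = df["date"].diff().ne(pd.Timedelta(hours=1)).cumsum()

# Keep only runs of at least 3 consecutive hourly rows.
out = df[key.groupby(key).transform("size").ge(3)]
print(out.index.tolist())  # rows 5..17 survive
```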
BENY