Suppose I have the following DataFrame:

import numpy as np
import pandas as pd
import datetime

index = pd.date_range(start=pd.Timestamp("2020/01/01 08:00"),
                      end=pd.Timestamp("2020/04/01 17:00"), freq='5T')

data = {'A': np.random.rand(len(index)),
        'B': np.random.rand(len(index))}

df = pd.DataFrame(data, index=index)

It is easy to access, say, every 8am with the following command:

eight_am = df.loc[datetime.time(8,0)]
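
(As an aside, I believe df.at_time gives the same rows for a single time of day:)

# Same selection as above, using the built-in at_time method
eight_am_alt = df.at_time(datetime.time(8,0))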

Suppose now I wish to access every 8am and every 9am. One way I could do this is via two masks:

mask1 = (df.index.time == datetime.time(8,0))
mask2 = (df.index.time == datetime.time(9,0))

eight_or_nine = df.loc[mask1 | mask2]

However, my issue comes when I want to access many different times of day. Say I wish to specify these times in a list:

times_to_access = [datetime.time(hr, mins) for hr, mins in zip([8,9,13,17],[0,15,35,0])]

It is quite ugly to create a mask variable for each time. Is there a nice way to do this programmatically in a loop, or perhaps a way of accessing multiple `datetime.time`s that I am not seeing?
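
For reference, the most direct loop-based approach I can see (which works, but feels clunky) is something like:

# Build one boolean mask per time of day and OR them together
combined_mask = np.zeros(len(df), dtype=bool)
for t in times_to_access:
    combined_mask |= (df.index.time == t)

many_times = df.loc[combined_mask]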

mch56

1 Answer

Use np.in1d with boolean indexing:

df = df[np.in1d(df.index.time, times_to_access)]
print(df)
                            A         B
2020-01-01 08:00:00  0.904687  0.922797
2020-01-01 09:15:00  0.467908  0.457840
2020-01-01 13:35:00  0.747596  0.534620
2020-01-01 17:00:00  0.559217  0.283298
2020-01-02 08:00:00  0.546884  0.361523
                      ...       ...
2020-03-31 17:00:00  0.541345  0.289005
2020-04-01 08:00:00  0.734592  0.137986
2020-04-01 09:15:00  0.108603  0.955305
2020-04-01 13:35:00  0.109969  0.187756
2020-04-01 17:00:00  0.222852  0.125966

[368 rows x 2 columns]
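
Note that in more recent NumPy versions np.isin is the recommended replacement for np.in1d, and it should behave the same here:

df = df[np.isin(df.index.time, times_to_access)]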

A pandas-only solution is possible by converting the index to a Series, but I think it will be slower for a large DataFrame:

df = df[df.index.to_series().dt.time.isin(times_to_access)]
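
If speed on a large DataFrame is a concern, another option to try (only a sketch, not benchmarked) is DatetimeIndex.indexer_at_time, which returns the positional indices for one time of day; the indexers for each time can then be combined:

# Positional indices for each requested time of day, combined and sorted back into order
pos = np.concatenate([df.index.indexer_at_time(t) for t in times_to_access])
df = df.iloc[np.sort(pos)]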
jezrael

    Thanks, this works. Though it's not the fastest when the size of the DataFrame is large, which I find odd as I thought `numpy` implementations were quick – mch56 Apr 24 '20 at 09:14