I am trying to analyze average daily fluctuations in a measurement "X" over several weeks using pandas DataFrames, but timestamps/datetimes etc. are proving particularly hellish to deal with. After a good few hours of trying to work this out, my code is getting messier and messier and I don't feel any closer to a solution, so I'm hoping someone here can guide me in the right direction.
I have measured X at different times and on different days, saving the daily results to a DataFrame of the form shown below (a snippet to reproduce this sample follows the table):
   Timestamp (datetime64)   X
0  2015-10-05 00:01:38      1
1  2015-10-05 06:03:39      4
2  2015-10-05 13:42:39      3
3  2015-10-05 22:15:39      2
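For reproducibility, here is a minimal snippet that builds a DataFrame of this shape (the values are just the ones from the table above):

    import pandas as pd

    # Sample data matching the table above
    df = pd.DataFrame({
        "Timestamp": pd.to_datetime([
            "2015-10-05 00:01:38",
            "2015-10-05 06:03:39",
            "2015-10-05 13:42:39",
            "2015-10-05 22:15:39",
        ]),
        "X": [1, 4, 3, 2],
    })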
As the time the measurement is made at changes from day to day, I decided to bin the data by time of day and then work out the mean and standard deviation for each bin, which I can then plot. My idea was to create a final DataFrame with one row per bin and the average value of X over the measurements in that bin; the 'Observations' column is just to aid understanding (a rough sketch of the approach I have in mind follows the table):
   Time Bin     Observations  <X>
0  00:00-05:59  [1, ...]      2.3
1  06:00-11:59  [4, ...]      4.6
2  12:00-17:59  [3, ...]      8.5
3  18:00-23:59  [2, ...]      3.1
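Something like the following is the closest I have to a plan, extracting the hour of day and binning it with pd.cut; I'm not at all sure this is the 'right' pandas way, and the bin edges and labels here are just my own guesses:

    # Rough sketch: bin each row by its hour of day, then aggregate X.
    # Assumes df has a datetime64 'Timestamp' column and numeric 'X' (as above).
    hours = df["Timestamp"].dt.hour

    # Four 6-hour bins; right=False gives intervals [0, 6), [6, 12), etc.
    bins = [0, 6, 12, 18, 24]
    labels = ["00:00-05:59", "06:00-11:59", "12:00-17:59", "18:00-23:59"]
    df["Time Bin"] = pd.cut(hours, bins=bins, labels=labels, right=False)

    # Mean and standard deviation of X per time-of-day bin
    summary = df.groupby("Time Bin")["X"].agg(["mean", "std"])
    print(summary)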
However, I've run into difficulties with incompatibilities between time, datetime, datetime64, timedelta, and binning using pd.cut and pd.groupby. Basically I feel like I'm making stabs in the dark with no idea of the 'right' way to approach this problem. The only robust solution I can think of is a row-by-row iteration through the DataFrame, but I'd really like to avoid having to do that.
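For reference, my best stab at a vectorized alternative so far is to group directly on 6-hour blocks of the hour; again, this is just a sketch and I don't know whether it's a sensible direction:

    # Alternative sketch, reusing df from above: group on 6-hour blocks
    # (0 = 00:00-05:59, 1 = 06:00-11:59, 2 = 12:00-17:59, 3 = 18:00-23:59)
    block = df["Timestamp"].dt.hour // 6
    summary = df.groupby(block)["X"].agg(["mean", "std"])
    print(summary)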