I have a large dataset with more than 500,000 date and time stamps that look like this:
date time
2017-06-25 00:31:53.993
2017-06-25 00:32:31.224
2017-06-25 00:33:11.223
2017-06-25 00:33:53.876
2017-06-25 00:34:31.219
2017-06-25 00:35:12.634
How do I round these timestamps off to the nearest second?
My code looks like this:
import datetime
import pandas as pd

readcsv = pd.read_csv(filename)
# Parse the columns first; taking the log_date/log_time references before the
# conversion would leave them pointing at the raw strings.
readcsv['date'] = pd.to_datetime(readcsv['date']).dt.date
readcsv['time'] = pd.to_datetime(readcsv['time']).dt.time
log_date = readcsv.date
log_time = readcsv.time
timestamp = [datetime.datetime.combine(log_date[i], log_time[i]) for i in range(len(log_date))]
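For completeness, here is a self-contained version of the above that anyone can run; the inline rows are just a stand-in for my real CSV file:

import datetime
import io

import pandas as pd

# A few sample rows standing in for the real 500,000-row file.
csv_data = io.StringIO(
    "date,time\n"
    "2017-06-25,00:31:53.993\n"
    "2017-06-25,00:32:31.224\n"
    "2017-06-25,00:33:11.223\n"
)

readcsv = pd.read_csv(csv_data)
readcsv['date'] = pd.to_datetime(readcsv['date']).dt.date
readcsv['time'] = pd.to_datetime(readcsv['time']).dt.time
timestamp = [datetime.datetime.combine(d, t)
             for d, t in zip(readcsv['date'], readcsv['time'])]
print(repr(timestamp[0]))  # datetime.datetime(2017, 6, 25, 0, 31, 53, 993000)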
So now I have combined the dates and times into a list of datetime.datetime
objects that looks like this:
datetime.datetime(2017, 6, 25, 0, 31, 53, 993000)
datetime.datetime(2017, 6, 25, 0, 32, 31, 224000)
datetime.datetime(2017, 6, 25, 0, 33, 11, 223000)
datetime.datetime(2017, 6, 25, 0, 33, 53, 876000)
datetime.datetime(2017, 6, 25, 0, 34, 31, 219000)
datetime.datetime(2017, 6, 25, 0, 35, 12, 634000)
Where do I go from here?
I tried df.timestamp.dt.round('1s'), but it doesn't seem to work for me.
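In case it helps, this is roughly what I expected to work; a minimal sketch, assuming timestamp is the list built above:

import pandas as pd

# Wrap the list in a datetime64 Series so the .dt accessor is available,
# then round every value to the nearest second.
ts = pd.Series(pd.to_datetime(timestamp))
rounded = ts.dt.round('1s')
print(rounded.head())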
I also tried splitting the strings with .split() and rounding manually, but I ran into trouble whenever the rounding pushed the seconds or minutes past 59 (see the sketch below).
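To show what I mean, here is a simplified sketch of the string-splitting approach I was attempting (round_time is a hypothetical stand-in, not my exact code):

# Split the time string, round the fractional seconds, and reassemble.
def round_time(time_str):
    hh, mm, ss = time_str.split(':')
    seconds = round(float(ss))                     # '59.876' rounds up to 60
    return '{}:{}:{:02d}'.format(hh, mm, seconds)

print(round_time('00:33:53.876'))  # 00:33:54  (fine)
print(round_time('00:59:59.876'))  # 00:59:60  (invalid: needs a carry into minutes)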
Many thanks