I have a DataFrame (df) with hundreds of millions of rows:
latitude longitude time VAL
0 -39.20000076293945312500 140.80000305175781250000 1972-01-19 13:00:00 1.20000004768371582031
1 -39.20000076293945312500 140.80000305175781250000 1972-01-20 13:00:00 0.89999997615814208984
2 -39.20000076293945312500 140.80000305175781250000 1972-01-21 13:00:00 1.50000000000000000000
3 -39.20000076293945312500 140.80000305175781250000 1972-01-22 13:00:00 1.60000002384185791016
4 -39.20000076293945312500 140.80000305175781250000 1972-01-23 13:00:00 1.20000004768371582031
... ...
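For timing experiments, a frame with a comparable layout can be built like this (the values are made up; only the shape and dtypes are meant to match, and the grid size here is hypothetical):

import numpy as np
import pandas as pd

# Roughly mimic the real layout: many grid points share the same daily timestamps.
times = pd.date_range('1972-01-19 13:00:00', periods=20_000, freq='D')
n_points = 500  # hypothetical number of lat/lon grid points
n_rows = len(times) * n_points

df = pd.DataFrame({
    'latitude': np.repeat(np.float32(-39.2), n_rows),
    'longitude': np.repeat(np.float32(140.8), n_rows),
    'time': np.tile(times, n_points),
    'VAL': np.random.rand(n_rows).astype(np.float32),
})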
It contains a time column of type datetime64 with values in UTC. The following code creates a new column isInDST that indicates whether each time falls within the daylight saving period of a local time zone.
df['isInDST'] = (
    pd.DatetimeIndex(df['time'])
    .tz_localize('UTC')
    .tz_convert('Australia/Victoria')
    .map(lambda x: x.dst().total_seconds() != 0)
)
It takes about 400 seconds to process 15,223,160 rows. Is there a faster approach? Would a vectorized solution perform better?
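By "vectorized" I mean something along these lines: an untested sketch that assumes timestamps repeat across grid points, so the DST flag only needs to be computed once per unique timestamp and then looked up for every row.

import pandas as pd

times = pd.DatetimeIndex(df['time'])

# Compute the DST flag once per unique timestamp instead of once per row.
unique_times = times.unique()
local = unique_times.tz_localize('UTC').tz_convert('Australia/Victoria')
dst_flag = pd.Series(
    [ts.dst() != pd.Timedelta(0) for ts in local],
    index=unique_times,
)

# Broadcast the per-timestamp result back to every row via a hashed lookup.
df['isInDST'] = df['time'].map(dst_flag)

Would something like this (or another approach entirely) be the recommended way to speed this up?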