I'm grouping half-hourly data by day and applying a function to calculate daily statistics on my dataset, but it's slow. Is there a way to improve the performance of the functions below? I've read about vectorisation, but I'm unsure how to implement it (my rough attempt is at the end of this post).
I've used apply and transform to get the outputs I want, but each takes about 2-3 seconds for a year's worth of data, and I hope to do it faster as I have a lot of data. Can anyone point me in the right direction?
import pandas as pd
import numpy as np
import timeit
# dummy data
date_range = pd.date_range('2017-01-01 00:00', '2018-01-01 00:00', freq='30Min')
df = pd.DataFrame(np.random.randint(2, 20, (date_range.shape[0], 2)),
                  index=date_range, columns=['Electricity', 'Natural Gas'])
print(df.head())
print(df.shape)
t1 = timeit.default_timer()
# for each day, count the half-hours whose value clears both thresholds,
# then divide by 2 to convert half-hour counts into hours
onhour = df.groupby(pd.Grouper(freq='D')).apply(lambda x: np.count_nonzero(
    x[x > x.quantile(0.05) + x.mean() * .1] >
    x.quantile(0.05) + 0.25 * (x.quantile(0.95) - x.quantile(0.05)),
    axis=0) / 2)
onhour = pd.DataFrame(
    onhour.values.tolist(),
    index=onhour.index,
    columns=df.columns)
print(f"start_time in {timeit.default_timer() - t1}")
print(onhour.head())
t1 = timeit.default_timer()
# same statistic via transform (broadcast back onto each row), then averaged per day
onhour = df.groupby(pd.Grouper(freq='D')).transform(lambda x: np.count_nonzero(
    x[x > x.quantile(0.05) + x.mean() * .1] >
    x.quantile(0.05) + 0.25 * (x.quantile(0.95) - x.quantile(0.05)),
    axis=0) / 2).resample('D').mean()
print(f"start_time in {timeit.default_timer() - t1}")
print(onhour.head())
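For what it's worth, below is my rough attempt at a vectorised version of the same statistic: compute the daily quantiles and means once with groupby, broadcast them back onto the half-hourly rows, and count with a boolean mask. The names (days, q05_b, onhour_fast and so on) are just my own placeholders, and I haven't confirmed it gives exactly the same numbers as the apply version:

# daily statistics computed once, in grouped passes
daily = df.groupby(pd.Grouper(freq='D'))
q05 = daily.quantile(0.05)   # 5th percentile per day, per column
q95 = daily.quantile(0.95)   # 95th percentile per day, per column
mean = daily.mean()          # mean per day, per column

# broadcast each day's statistics back onto its 48 half-hourly rows
days = df.index.floor('D')
q05_b = q05.reindex(days).set_axis(df.index)
q95_b = q95.reindex(days).set_axis(df.index)
mean_b = mean.reindex(days).set_axis(df.index)

# the same two conditions as the lambda, evaluated on the whole frame at once
mask = (df > q05_b + 0.1 * mean_b) & (df > q05_b + 0.25 * (q95_b - q05_b))
onhour_fast = mask.groupby(days).sum() / 2   # half-hour counts -> hours

My thinking is that this avoids calling a Python lambda once per day, so the quantile work happens in a few grouped passes instead, but I'd appreciate confirmation that this is the right direction.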