
I am looking for a way to take a daily time-indexed DataFrame, build a rolling window over the last two years, resample that window to every 5th day, and then run functions on the resampled data.

FYI, in this case I want to run a regression y ~ X (as per the DataFrame below).

So the output will be a time-indexed series with a Beta value for each day (ignoring the first 2 years).

Currently I am using a row-based loop, but it is extremely slow.

I feel there should be an easier way to accomplish this.

Thanks in advance

import numpy as np
import pandas as pd
import statsmodels.api as sm
from pandas.tseries.offsets import DateOffset

date_range = pd.date_range('2015-01-01', '2019-12-31')
df = pd.DataFrame(np.random.rand(len(date_range), 2), index=date_range, columns=['X', 'y'])

The code I am currently using:


def rolling_stats(X, y, years_window=2):

    idx = X.index

    assert len(X) == len(y)

    # positions of the first non-NaN values (not used below)
    x_idx = np.isnan(X).argmin()
    y_idx = np.isnan(y).argmin()

    # accumulators (not used below)
    out_dates = []
    out_beta = []
    out_rsq = []
    out_stderr = []

    df = pd.DataFrame(np.nan, columns=['Beta', 'RSQ', 'StdErr'], index=idx)

    for date in idx:

        # window: the two years up to and including the current date
        start_date = date - DateOffset(years=years_window)

        # every 5th day inside the window
        date_range = pd.bdate_range(start_date, date, freq='5D')

        try:
            X_reg = X.loc[X.index.isin(date_range)]
            y_reg = y.loc[y.index.isin(date_range)]

            assert len(X_reg) == len(y_reg)

            X_c = sm.add_constant(X_reg)
            result = sm.OLS(y_reg, X_c).fit()

            df.loc[date, 'RSQ'] = result.rsquared
            df.loc[date, 'Beta'] = result.params[1]
            df.loc[date, 'StdErr'] = np.sqrt(result.mse_resid)

        except Exception:
            df.loc[date, 'RSQ'] = np.nan
            df.loc[date, 'Beta'] = np.nan
            df.loc[date, 'StdErr'] = np.nan

    return df
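
For reference, the function is called roughly like this (the variable name stats is just illustrative):

stats = rolling_stats(df['X'], df['y'])  # daily Beta / RSQ / StdErr, NaN where a fit fails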

2 Answers


Doing a rolling.apply with several columns as input (here X, y) and returning 3 outputs is not possible with the built-in rolling methods; a short illustration of that limitation follows, and then the stride-based roll trick from piRSquared that works around it.
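
As a minimal sketch (using the df defined in the question, purely for illustration), the function passed to rolling(...).apply receives one column's window at a time and must return a single scalar, so X and y are never available together:

# each call gets a length-3 Series from a single column and must return one scalar
df[['X', 'y']].rolling(3).apply(lambda window: window.mean(), raw=False)

The roll helper instead materialises every window as its own small DataFrame: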

from numpy.lib.stride_tricks import as_strided as stride
import pandas as pd

def roll(df, w, **kwargs):
    v = df.values
    d0, d1 = v.shape
    s0, s1 = v.strides

    a = stride(v, (d0 - (w - 1), w, d1), (s0, s0, s1))

    rolled_df = pd.concat({
        row: pd.DataFrame(values, columns=df.columns)
        for row, values in zip(df.index[w-1:], a)  # differs from the original recipe: label each window with its last date
    })

    return rolled_df.groupby(level=0, **kwargs)
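
As a quick sanity check (a toy frame, purely illustrative), the object roll returns is an ordinary groupby, so any aggregation runs once per window:

import numpy as np

toy = pd.DataFrame({'a': np.arange(6.0), 'b': np.arange(6.0) * 10},
                   index=pd.date_range('2020-01-01', periods=6))

# mean of every 3-row window, labelled with the window's last date
print(roll(toy, 3).mean())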

Then define the function applied to each window view to pull the needed results out of the OLS fit:

def result_tup(df_roll):
    # OLS of y on [constant, X] for one window view;
    # return (window end date, R², beta on X, residual MSE)
    result = sm.OLS(df_roll['y'], df_roll[['one', 'X']]).fit()
    return (df_roll.index.get_level_values(0)[-1], result.rsquared,
            result.params[1], result.mse_resid)

Now apply this function over groups spaced at 5-day intervals:

# the input data, with a fixed random seed for reproducibility
date_range=pd.date_range('2015-01-01','2019-12-31')
np.random.seed(1)
df=pd.DataFrame(np.random.rand(len(date_range),2),index=date_range,columns=['X','y'])

# the two parameters; add the constant column once instead of calling sm.add_constant in every window
years_windows = 2
day_freq = 5
df['one'] = 1

# number of rows in a two-year window sampled every day_freq days
len_window = len(pd.date_range(pd.Timestamp.today().date() - pd.DateOffset(years=years_windows),
                               pd.Timestamp.today().date(), freq=f'{day_freq}D'))

# split df into day_freq interleaved sub-series and run the rolling regression on each
df_res = pd.concat([pd.DataFrame(roll(dfg, len_window).apply(result_tup).tolist(),
                                 columns=['date', 'RSQ', 'Beta', 'StdErr'])
                    for _, dfg in df.groupby(np.arange(len(df)) % day_freq)])\
           .set_index('date').sort_index()

# the standard error is the square root of the residual MSE
df_res['StdErr'] = np.sqrt(df_res['StdErr'])
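
The groupby(np.arange(len(df)) % day_freq) part is what produces the 5-day sampling: it splits the daily frame into day_freq interleaved sub-series, each of which already contains every 5th calendar day. A small sketch of the idea (toy data, for illustration only):

demo = pd.Series(range(10), index=pd.date_range('2015-01-01', periods=10))
for key, grp in demo.groupby(np.arange(len(demo)) % 5):
    # each group holds every 5th day, shifted by `key` days
    print(key, [d.strftime('%Y-%m-%d') for d in grp.index])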

and you get the result much faster:

                 RSQ        Beta    StdErr
date            
2016-12-31  0.000107    0.010800    0.300927
2017-01-01  0.001380    0.036603    0.291804
2017-01-02  0.000870    -0.030364   0.294584
2017-01-03  0.003308    0.056052    0.280171
2017-01-04  0.005622    -0.081809   0.303257
... ... ... ...
2019-12-27  0.000147    0.012609    0.287182
2019-12-28  0.001144    -0.031921   0.268274
2019-12-29  0.000120    0.010720    0.289787
2019-12-30  0.000280    0.014995    0.278135
2019-12-31  0.018433    0.137605    0.293537
Ben.T

The pandas API has a wealth of convenience functions for this kind of task:

                   X         y
2015-01-01  0.649573  0.077779
2015-01-02  0.482643  0.358702
2015-01-03  0.710907  0.269485
2015-01-04  0.807316  0.288014
2015-01-05  0.274537  0.287975
...

First, a rolling average over 365 * 2 rows (one row per day). Then we drop the first two years, where the rolling values are still null, and finally resample to five-day periods.

df.rolling(365 * 2).mean().dropna(how='all').resample('5D').mean()
                   X         y
2016-12-30  0.505062  0.492843
2017-01-04  0.503317  0.494553
2017-01-09  0.503280  0.495643
2017-01-14  0.501926  0.495538
2017-01-19  0.499519  0.495316
...
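
If hard-coding 365 * 2 rows feels fragile (leap years add an extra row), rolling also accepts a time-based window on a DatetimeIndex; a possible variant of the same chain, assuming the data really is daily with no gaps:

df.rolling('730D', min_periods=365 * 2).mean().dropna(how='all').resample('5D').mean()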
Dave