I have daily S&P 500 prices and Treasury yields. The end goal is to determine how Treasuries perform, graphically and mathematically, during corrections in the S&P. A correction is a decline of some % off the last peak, with the % being an adjustable parameter.
import urllib2  # Python 2; on Python 3 use urllib.request (or pass the URL straight to read_csv)
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import scipy as sp

correction = 0.1  # % decline from the last peak that constitutes a market correction

# S&P 500 daily closes (^GSPC)
sp_data = urllib2.urlopen('http://real-chart.finance.yahoo.com/table.csv?s=%5EGSPC&a=00&b=3&c=1950&d=00&e=14&f=2016&g=d&ignore=.csv')
df1 = pd.read_csv(sp_data)[['Date', 'Close']].rename(columns={'Close': 'S&P_500'})

# 10-year Treasury yield (^TNX)
t_bill_data = urllib2.urlopen('http://real-chart.finance.yahoo.com/table.csv?s=%5ETNX&a=00&b=2&c=1962&d=00&e=14&f=2016&g=d&ignore=.csv')
df2 = pd.read_csv(t_bill_data)[['Date', 'Close']].rename(columns={'Close': 'T_Bill'})

# Outer-join on date so days present in only one series are kept
df3 = pd.merge(df1, df2, on='Date', how='outer')
df3['Date'] = pd.to_datetime(df3['Date'], format='%Y-%m-%d')
df3 = df3.set_index('Date').sort_index()  # Yahoo serves rows newest-first; sort ascending

df3.describe()
df3.plot(kind='line', title='S&P 500 vs. 10 yr T-Bill', subplots=True)
How can I identify and subset the DataFrame into distinct periods of S&P corrections, so that the plots and summary statistics can focus on those time periods and I can measure the correlation between S&P corrections and Treasuries? SciPy has tools for identifying global or local minima and maxima -- is there a pythonic way to tailor these to identify periods of correction?
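One direction I have considered (a minimal sketch, not tested against the full dataset): instead of SciPy extrema, compute the drawdown from the running peak with `cummax`, flag rows at least `correction` below the peak, and number each contiguous flagged run so the DataFrame can be grouped per correction. The function name `correction_periods` and the synthetic prices are my own illustration, not from the real data.

```python
import numpy as np
import pandas as pd


def correction_periods(prices, threshold=0.1):
    """Return (drawdown, period_id) for a price series.

    drawdown  -- fractional decline from the running peak (<= 0)
    period_id -- integer id of the correction each row belongs to,
                 NaN for rows not in a correction
    """
    peak = prices.cummax()                  # running all-time high so far
    drawdown = prices / peak - 1.0          # % off the last peak
    in_corr = drawdown <= -threshold        # at least `threshold` below peak
    # a new correction starts where in_corr flips False -> True
    starts = in_corr & ~in_corr.shift(fill_value=False)
    period_id = starts.cumsum().where(in_corr)  # NaN outside corrections
    return drawdown, period_id


# Illustrative synthetic series: one dip of more than 10% off the peak
prices = pd.Series([100.0, 105.0, 95.0, 90.0, 104.0, 106.0, 100.0])
dd, pid = correction_periods(prices, threshold=0.1)
```

With `period_id` as a column on the merged frame, `df3.groupby(period_id)` would give one group per correction, on which `.describe()`, `.corr()`, or per-period plots can be run.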