Using data from the Zillow research data site (mainly city level): the data structure is 6 columns containing city-related information, while the remaining 245 columns contain the monthly sale price. I've used the code below to display a sample of the data:
import pandas as pd
from tabulate import tabulate

df = pd.read_csv("City_Zhvi_AllHomes.csv")

# keep the first 7 columns (the 6 city-info columns plus the first month,
# 1996-04) together with the very last month (2016-08)
c = df.columns.tolist()
cols = c[:7]
cols.append(c[-1])

print(tabulate(df[cols].iloc[23:29], headers='keys', tablefmt='orgtbl'))
The above code will print a sample as shown below:
| | RegionID | RegionName | State | Metro | CountyName | SizeRank | 1996-04 | 2016-08 |
|----+------------+---------------+---------+---------------+--------------+------------+-----------+-----------|
| 23 | 5976 | Milwaukee | WI | Milwaukee | Milwaukee | 24 | 68100 | 99500 |
| 24 | 7481 | Tucson | AZ | Tucson | Pima | 25 | 91500 | 153000 |
| 25 | 13373 | Portland | OR | Portland | Multnomah | 26 | 121100 | 390500 |
| 26 | 33225 | Oklahoma City | OK | Oklahoma City | Oklahoma | 27 | 64900 | 130500 |
| 27 | 40152 | Omaha | NE | Omaha | Douglas | 28 | 88900 | 143800 |
| 28 | 23429 | Albuquerque | NM | Albuquerque | Bernalillo | 29 | 115400 | 172000 |
Part of df is a time series. The trick here is to separate the time-dependent columns from the rest and use pandas resample and to_datetime. Suppose we are only interested in summarising the sales for the years 1998-1999 (i.e., strictly before 2000); that will let us select the relevant columns:
# separate the time columns and convert their names to datetime
tdf = df[df.columns[6:]].rename(columns=pd.to_datetime)

# find the columns in the period 1998-1999
cols = tdf.columns
sel_cols = cols[(cols > '1997-12-31') & (cols < '2000')]

# select those columns, resample along the columns in 6-month bins,
# take the mean, and rename the columns to a year$half format
mdf = tdf[sel_cols].resample('6M', axis=1).mean().rename(
    columns=lambda x: '{}${}'.format(x.year, [1, 2][x.quarter > 2]))

# reattach the non-time columns
mdf[df.columns[:6]] = df[df.columns[:6]]

print(tabulate(mdf[mdf.columns[0:9]].iloc[23:29],
               headers='keys', tablefmt='orgtbl'))
The above code will print a sample as shown below:
| | 1998$1 | 1998$2 | 1999$1 | 1999$2 | 2000$1 | RegionID | RegionName | State | Metro |
|----+----------+----------+----------+----------+----------+------------+---------------+---------+---------------|
| 23 | 71900 | 72483.3 | 72616.7 | 74266.7 | 75920 | 5976 | Milwaukee | WI | Milwaukee |
| 24 | 94200 | 95133.3 | 96533.3 | 99100 | 100600 | 7481 | Tucson | AZ | Tucson |
| 25 | 139000 | 141900 | 145233 | 148900 | 151980 | 13373 | Portland | OR | Portland |
| 26 | 68500 | 69616.7 | 72016.7 | 73616.7 | 74900 | 33225 | Oklahoma City | OK | Oklahoma City |
| 27 | 98200 | 99250 | 103367 | 109083 | 112160 | 40152 | Omaha | NE | Omaha |
| 28 | 121000 | 122050 | 122833 | 123633 | 124420 | 23429 | Albuquerque | NM | Albuquerque |
The question is: why does the last column of the resample result carry the year "2000", despite the selection using < '2000'?
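To make the puzzle concrete, the raw bin labels can be inspected before the rename is applied. This is a minimal check, assuming tdf and sel_cols from the snippet above are still in scope:
# look at the bin labels produced by the 6M resample, before any renaming
raw = tdf[sel_cols].resample('6M', axis=1).mean()
print(raw.columns)
The last label falls in the year 2000 (that is where the 2000 in 2000$1 comes from), even though every selected column lies in 1998-1999.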
EDIT: Just for fun, I include a more "pandorable" method of doing the above:
import pandas as pd

housing = (pd.read_csv('City_Zhvi_AllHomes.csv', index_col=list(range(6)))
             # keep only the 1998-1999 monthly columns
             .filter(regex='199[8-9]-[0-1][0-9]')
             # convert the column names to datetime
             .rename(columns=pd.to_datetime)
             # resample along the columns in two-quarter (half-year) bins
             .resample('2Q', closed='left', axis=1)
             .mean()
             # reshape the period labels into the year$half format (1998$1, ...)
             .rename(columns=lambda x: str(x.to_period('2Q'))
                     .replace('Q', '$').replace('2', '1').replace('4', '2'))
             .reset_index())
This delivers the desired outcome; the printout of housing.iloc[23:27,4:] is shown below:
| | CountyName | SizeRank | 1998$1 | 1998$2 | 1999$1 | 1999$2 |
|----+--------------+------------+----------+----------+----------+----------|
| 23 | Milwaukee | 24 | 72366.7 | 72583.3 | 73916.7 | 75750 |
| 24 | Pima | 25 | 94883.3 | 96183.3 | 98783.3 | 100450 |
| 25 | Multnomah | 26 | 141167 | 144733 | 148183 | 151767 |
| 26 | Oklahoma | 27 | 69300 | 71550 | 73466.7 | 74766.7 |
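For comparison, the same kind of inspection can be done on the 2Q version. The sketch below simply drops the final rename from the chain above so the raw bin labels are visible; it assumes City_Zhvi_AllHomes.csv is available as before:
import pandas as pd
raw2q = (pd.read_csv('City_Zhvi_AllHomes.csv', index_col=list(range(6)))
           .filter(regex='199[8-9]-[0-1][0-9]')
           .rename(columns=pd.to_datetime)
           .resample('2Q', closed='left', axis=1)
           .mean())
print(raw2q.columns)
Judging from the four half-year columns in the table above, this produces only four bins and all of their labels stay inside 1998-1999.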