I'm trying to scrape web data to create 30 data frames. The following code doesn't work:
#import time & pandas
import time
import pandas as pd
franchises = {'atl':'ATL', 'bos':'BOS', 'brk':'BRK', 'chi':'CHI', 'cho':'CHO', 'cle':'CLE', 'dal':'DAL', 'den':'DEN', 'det':'DET', 'gsw':'GSW', 'hou':'HOU', 'ind':'IND', 'lac':'LAC', 'lal':'LAL', 'mem':'MEM', 'mia':'MIA', 'mil':'MIL', 'min':'MIN', 'nop':'NOP', 'nyk':'NYK', 'okc':'OKC', 'orl':'ORL', 'phi':'PHI', 'pho':'PHO', 'por':'POR', 'sac':'SAC', 'sas':'SAS', 'tor':'TOR', 'uta':'UTA', 'was':'WAS'}
#set up custom function to scrape contract dataframes from BB-Ref
def dfscrape(tm_nm):
    # build the team's contracts URL from its abbreviation
    url = 'https://www.basketball-reference.com/contracts/' + franchises[tm_nm] + '.html'
    # read_html returns a list of dataframes; the contracts table is the first
    contracts = pd.read_html(url)[0]
    time.sleep(1)  # pause between requests
    return contracts
dfscrape(tm_nm = 'atl')
The code that assigns the URL works. However, the dataframe 'contracts' is not always created when I run dfscrape(tm_nm = 'atl'). Additionally, I would like the name of 'contracts' to change on each call of the function, so that I end up with 30 dataframes.
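For the intermittent failures, I don't know the cause, but I've been considering a simple retry wrapper. This is entirely my guess; the attempt count and delay are made up, and with_retries is a helper I invented, not part of any library:

```python
import time

def with_retries(fn, attempts=3, delay=2):
    """Call fn(), retrying on any exception up to `attempts` times."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: let the last error propagate
            time.sleep(delay)  # wait a bit before trying again

# hypothetical usage with my scraper:
# contracts = with_retries(lambda: dfscrape('atl'))
```

Is something like this reasonable, or is there a better way to handle flaky reads?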
Should I be using a for loop? I can't figure out how to iteratively assign new names to dataframes.
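The pattern I've seen suggested (my assumption, untested against the real scraper) is to keep one dict of dataframes keyed by team code instead of 30 separately named variables. A minimal sketch, with a dummy frame standing in for dfscrape so it runs offline:

```python
import pandas as pd

teams = ['atl', 'bos', 'brk']  # abbreviated; the real list has 30 teams

# one dict holds every team's dataframe, keyed by team code
team_dfs = {}
for tm in teams:
    # placeholder for dfscrape(tm); a dummy frame stands in here
    team_dfs[tm] = pd.DataFrame({'team': [tm.upper()]})

# any team's dataframe is then reachable by its key
print(team_dfs['atl'].loc[0, 'team'])  # -> ATL
```

Is this dict-of-dataframes approach the right replacement for dynamically creating variable names?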