
I'm trying to import data from multiple web pages into a data table using Python. Basically, I'm trying to download attendance data for certain teams since 2000.

Here is what I have so far:

import requests
import pandas as pd
import numpy as np

#What is the effect of a rival team's performance on a team's attendance

Teams = ['LAA', 'LAD', 'NYY', 'NYM', 'CHC', 'CHW', 'OAK', 'SFG']
Years = []
for year in range(2000,2020):
    Years.append(str(year))

bbattend = pd.DataFrame(columns=['GM_Num','Date','Team','Home','Opp','W/L','R','RA','Inn','W-L','Rank','GB','Time','D/N','Attendance','Streak','Game_Win','Wins','Losses','Net_Wins'])

for team in Teams:
    for year in Years:
        url = 'https://www.baseball-reference.com/teams/' + team + '/' + year +'-schedule-scores.shtml'
        html = requests.get(url).content
        df_list = pd.read_html(html)
        df = df_list[-1]

        #Formatting data table
        df.rename(columns={"Gm#": "GM_Num", "Unnamed: 4": "Home", "Tm": "Team", "D/N": "Night"}, inplace = True)
        df['Home'] = df['Home'].apply(lambda x: 0 if x == '@' else 1)
        df['Game_Win'] = df['W/L'].astype(str).str[0]
        df['Game_Win'] = df['Game_Win'].apply(lambda x: 0 if x == 'L' else 1)
        df['Night'] = df['Night'].apply(lambda x: 1 if x == 'N' else 0)
        df['Streak'] = df['Streak'].apply(lambda x: -1*len(x) if '-' in x else len(x))
        df.drop('Unnamed: 2', axis=1, inplace = True)
        df.drop('Orig. Scheduled', axis=1, inplace = True)
        df.drop('Win', axis=1, inplace = True)
        df.drop('Loss', axis=1, inplace = True)
        df.drop('Save', axis=1, inplace = True)
        #Drop rows that do not have data
        df = df[df['GM_Num'].str.isdigit()]
        WL = df["W-L"].str.split("-", n = 1, expand = True)
        df["Wins"] = WL[0].astype(dtype=np.int64)
        df["Losses"] = WL[1].astype(dtype=np.int64)
        df['Net_Wins'] = df['Wins'] - df['Losses']
        bbattend.append(df)

bbattend

When I run the body of the loop on its own, with a specific hard-coded link instead of building the url by concatenation, it seems to work.

However, using this code, I am getting the error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-77-997e6aeea77e> in <module>
     16         url = 'https://www.baseball-reference.com/teams/' + team + '/' + year +'-schedule-scores.shtml'
     17         html = requests.get(url).content
---> 18         df_list = pd.read_html(html)
     19         df = df_list[-1]
     20         #Formatting data table

~/anaconda3/lib/python3.7/site-packages/pandas/io/html.py in read_html(io, match, flavor, header, index_col, skiprows, attrs, parse_dates, tupleize_cols, thousands, encoding, decimal, converters, na_values, keep_default_na, displayed_only)
   1092                   decimal=decimal, converters=converters, na_values=na_values,
   1093                   keep_default_na=keep_default_na,
-> 1094                   displayed_only=displayed_only)

~/anaconda3/lib/python3.7/site-packages/pandas/io/html.py in _parse(flavor, io, match, attrs, encoding, displayed_only, **kwargs)
    914             break
    915     else:
--> 916         raise_with_traceback(retained)
    917 
    918     ret = []

~/anaconda3/lib/python3.7/site-packages/pandas/compat/__init__.py in raise_with_traceback(exc, traceback)
    418         if traceback == Ellipsis:
    419             _, _, traceback = sys.exc_info()
--> 420         raise exc.with_traceback(traceback)
    421 else:
    422     # this version of raise is a syntax error in Python 3

ValueError: No tables found

I don't really understand what the error message is saying. I'd appreciate any help!

Laurel
  • One or more of the urls does not have a table; you can try using `try:` and `except:`. One example of this is the url `https://www.baseball-reference.com/teams/LAA/2000-schedule-scores.shtml`, which returns a 404, so there is no table on that page – It_is_Chris Nov 09 '19 at 18:44
  • Thank you! I didn't realize that – Laurel Nov 09 '19 at 18:49
  • [Never call DataFrame.append or pd.concat inside a for-loop. It leads to quadratic copying.](https://stackoverflow.com/a/36489724/1422451) – Parfait Nov 09 '19 at 19:05

1 Answer


Some of the pages you are requesting do not contain any table. For example, `https://www.baseball-reference.com/teams/LAA/2000-schedule-scores.shtml` returns a 404 page, so there is no table on it for pandas to parse.

For those pages, `df_list = pd.read_html(html)` raises `ValueError: No tables found`.

You should wrap the `read_html` call in a try-except and skip those pages.
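A minimal sketch of that approach (`read_last_table` is just an illustrative helper name, not part of pandas; the URL construction in the commented loop is the one from your question):

```python
from io import StringIO

import pandas as pd

def read_last_table(html):
    """Return the last table found in the HTML, or None if there is none."""
    try:
        return pd.read_html(StringIO(html))[-1]
    except ValueError:  # pandas raises "ValueError: No tables found"
        return None

# In your loop this becomes:
#
#     frames = []
#     for team in Teams:
#         for year in Years:
#             url = ('https://www.baseball-reference.com/teams/'
#                    + team + '/' + year + '-schedule-scores.shtml')
#             df = read_last_table(requests.get(url).text)
#             if df is None:
#                 continue  # e.g. LAA/2000 is a 404 with no table
#             # ...formatting steps from your question...
#             frames.append(df)
#     bbattend = pd.concat(frames, ignore_index=True)
```

Note that `DataFrame.append` returns a new frame rather than modifying `bbattend` in place, so your `bbattend.append(df)` line also discards its result; collecting the frames in a list and calling `pd.concat` once after the loop (as Parfait's comment suggests) fixes that as well.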

quan-ng