I am trying to import tables from multiple pages and append each table into one data frame. One of the links does not have a table, which causes the function to fail. Is there a way to just skip the URLs that raise errors? (There are many more URLs that raise errors; I excluded them from this code.)
import pandas as pd

urls = ['https://basketball.realgm.com/player/Stephen-Curry/GameLogs/1600',
        'https://basketball.realgm.com/player/LeBron-James/GameLogs/250',
        'https://basketball.realgm.com/player/Anthony-Edwards/GameLogs/117444',
        'https://basketball.realgm.com/player/Jalen-Washington/GameLogs/151233']

def fxGameLogs(URL: list) -> pd.DataFrame:
    dfs = []  # empty list
    for x in URL:
        GameLog_list = pd.read_html(x)  # read all tables on the page
        GameLogs = GameLog_list[0]      # keep the first table (the game log)
        dfs.append(GameLogs)            # append frame to list
    return pd.concat(dfs).reset_index(drop=True)  # concat frames and return

GameLogs = fxGameLogs(urls)
print(GameLogs)
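
For what it's worth, here is a minimal sketch of the try/except approach I have in mind. It assumes that pd.read_html raises ValueError when a page has no table (network failures would need a broader except clause), and the function name fxGameLogs_skip is just my own placeholder:

import pandas as pd

def fxGameLogs_skip(urls: list) -> pd.DataFrame:
    dfs = []
    for url in urls:
        try:
            tables = pd.read_html(url)  # raises ValueError if the page has no <table>
        except ValueError:
            continue  # no table on this page, so skip this URL
        dfs.append(tables[0])  # keep the first table on the page
    return pd.concat(dfs).reset_index(drop=True)

GameLogs = fxGameLogs_skip(urls)
print(GameLogs)

One caveat I can see: if every URL fails, dfs stays empty and pd.concat itself raises ValueError, so that case may need a guard. Is this the right way to do it, or is there a more idiomatic approach?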