I have seen a few posts about this already but have not been able to find a straightforward answer.
I have a fairly basic loop. It runs some SQL for each table name in a list and writes the output to a csv file. The database has a few thousand tables, and a few of them are just massive, so those queries take forever. In the interest of getting on with life (and since this data isn't super important), I would like my loop to skip an iteration if the query takes longer than a minute.
Here is my loop:
import pandas as pd

for t in tablelist:
    # pull the whole table and dump it to csv
    df = pd.read_sql(sql=f'''select * from [DB].[SCHEMA].[{t}]''', con=conn)
    df.to_csv(path, index=None)
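One idea I have been toying with (not sure it's the right approach) is to run each query in a worker thread via concurrent.futures and stop waiting on it after 60 seconds. A rough sketch of what I mean is below; the make_conn() helper is hypothetical, standing in for however you would open a fresh connection per table, since an abandoned query could still be holding the shared conn:

from concurrent.futures import ThreadPoolExecutor, TimeoutError as FuturesTimeout

import pandas as pd

def dump_table(t):
    # make_conn() is a placeholder: open a new connection for each table,
    # because a skipped query may still be using the previous connection
    df = pd.read_sql(sql=f'''select * from [DB].[SCHEMA].[{t}]''', con=make_conn())
    df.to_csv(path, index=None)

for t in tablelist:
    executor = ThreadPoolExecutor(max_workers=1)
    future = executor.submit(dump_table, t)
    try:
        future.result(timeout=60)  # wait at most one minute for this table
    except FuturesTimeout:
        print(f'skipping {t}: query ran over a minute')
    # don't block on a still-running worker; just move on to the next table
    executor.shutdown(wait=False)

My understanding is that this only stops the loop from waiting; the skipped query keeps running in the background (and the script won't exit until those threads finish), so it doesn't actually cancel anything server-side. Is there a cleaner way to get the skip-after-a-minute behavior?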