How do I identify which kind of exception renaming columns in PySpark will raise, and how do I handle it? My function:
def rename_columnsName(df, columns):  # expects names in dictionary format
    if isinstance(columns, dict):
        for old_name, new_name in columns.items():
            df = df.withColumnRenamed(old_name, new_name)
        return df  # return the DataFrame itself; df.show() only prints and returns None
    else:
        raise ValueError("'columns' should be a dict, like {'old_name':'new_name', 'old_name_one_more':'new_name_1'}")
How can I test it by generating an exception with a dataset?
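Two points worth knowing before testing. First, `withColumnRenamed` is a no-op when `old_name` does not exist in the DataFrame, so the rename itself raises nothing. The exceptions you can realistically hit are (1) the `ValueError` raised by your own `isinstance` check, and (2) `AnalysisException` (in `pyspark.sql.utils`, or `pyspark.errors` in recent versions) when a later operation such as `select` references a column that does not exist. Below is a minimal sketch of testing the `ValueError` path; `FakeDataFrame` is a hypothetical stand-in used here only so the example runs without a Spark session, and with a real DataFrame the calls to `rename_columnsName` are identical:

```python
def rename_columnsName(df, columns):
    """Rename columns given a dict {old_name: new_name}; return the new DataFrame."""
    if isinstance(columns, dict):
        for old_name, new_name in columns.items():
            df = df.withColumnRenamed(old_name, new_name)
        return df
    else:
        raise ValueError("'columns' should be a dict, like {'old_name': 'new_name'}")

# Hypothetical stand-in for pyspark.sql.DataFrame (NOT part of pyspark),
# so this sketch runs without a Spark cluster.
class FakeDataFrame:
    def __init__(self, columns):
        self.columns = list(columns)

    def withColumnRenamed(self, old, new):
        # Mimics Spark's behavior: missing old names are silently ignored.
        return FakeDataFrame([new if c == old else c for c in self.columns])

df = FakeDataFrame(["id", "name"])

# Happy path: dict input renames the matching column.
renamed = rename_columnsName(df, {"name": "full_name"})
print(renamed.columns)  # ['id', 'full_name']

# Generate and handle the exception: pass a non-dict.
try:
    rename_columnsName(df, ["name", "full_name"])
except ValueError as e:
    print(f"caught: {e}")
```

Against a real Spark DataFrame, you would wrap downstream column references (e.g. `df.select("no_such_col")`) in a `try/except AnalysisException` block in the same way, since that is the exception Spark raises for unresolvable columns.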