try:
    df1 = spark.read.format("delta").load("/abc")
except Exception as ex:
    print(ex)
This raises an exception with the message: '/abc' is not a Delta table.;
I want to perform a specific operation when the table/file is not found. Is there an exception class already available that I can catch separately for this case, something like FileNotFoundException? That one doesn't seem to work here.
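For reference, this is roughly what I am trying to do (just a sketch; I'm assuming pyspark.sql.utils.AnalysisException is the class Spark raises for this error, and /abc is only an example path):

from pyspark.sql.utils import AnalysisException  # exception type Spark uses for analysis errors

try:
    df1 = spark.read.format("delta").load("/abc")
except AnalysisException as ex:
    # Inspecting the message text is the only way I found to tell
    # "missing Delta table" apart from other analysis errors, which feels fragile.
    if "is not a Delta table" in str(ex) or "Path does not exist" in str(ex):
        print("Delta table not found:", ex)  # my specific handling would go here
    else:
        raise  # re-raise anything unrelated

Is there a cleaner way than matching on the message string, e.g. a dedicated exception class or an API check I can do before loading?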