schema = <Schema of excel file>
df = (spark.read.format("com.crealytics.spark.excel")
      .option("useHeader", "true")
      .option("mode", "FAILFAST")
      .option("dataAddress", "Sheet1")
      .schema(schema)
      .load("C:\\Users\\ABC\\Downloads\\Input.xlsx"))
df.show()
The PySpark Excel-read snippet above does not fail or throw a runtime exception when the action (show()) encounters incorrect/corrupt data. option("mode", "FAILFAST") works fine for CSV, but when I use the com.crealytics.spark.excel jar it does not fail; it returns results with the incorrect/corrupt data silently dropped.
Has anyone encountered the same issue?
Thanks in advance!