I am attempting to set up a Databricks Autoloader stream to read a large number of CSV files, but I get the following error:
Found invalid character(s) among " ,;{}()\n\t=" in the column names of your schema.
This is due to the CSV column names containing spaces. The message suggests enabling column mapping by setting the table property 'delta.columnMapping.mode' to 'name' and refers me to this docs page, but I cannot see a way to implement this for an Autoloader stream.
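From my reading of the column mapping docs, the property appears to be set on an existing Delta table, roughly like this (a sketch only; "my_schema.my_table" is a placeholder name):

# A sketch of what the docs seem to describe, assuming the target Delta
# table already exists. The table name is a placeholder.
spark.sql("""
    ALTER TABLE my_schema.my_table SET TBLPROPERTIES (
        'delta.columnMapping.mode' = 'name',
        'delta.minReaderVersion' = '2',
        'delta.minWriterVersion' = '5'
    )
""")

But the stream below reads raw CSV files, and no Delta table exists at that point, so I don't see where this property would go.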
This is the code for setting up the stream:
stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "csv")
    # Where Autoloader persists the inferred schema between runs
    .option("cloudFiles.schemaLocation", delta_loc)
    # Column that captures data that does not match the inferred schema
    .option("rescuedDataColumn", "_rescued_data")
    .option("header", "true")
    .option("delimiter", "|")
    # Only pick up CSV files whose names contain the file code
    .option("pathGlobFilter", f"*{file_code}*.csv")
    .load(data_path)
)
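My best guess is that the property has to be applied where the stream is written out to a Delta table rather than on the read side, something like the sketch below, but I am not sure this is correct (the checkpoint location and table name are placeholders I made up):

# Hypothetical write side: set a session-level default so that Delta
# tables created by this stream pick up column mapping, then write out.
# "checkpoint_loc" and "my_schema.my_table" are placeholders.
spark.conf.set(
    "spark.databricks.delta.properties.defaults.columnMapping.mode", "name"
)

(stream.writeStream
    .format("delta")
    .option("checkpointLocation", checkpoint_loc)
    .toTable("my_schema.my_table"))

Is this the intended way to enable column mapping for an Autoloader stream, or is there an option I am missing on the read side?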