df.write.mode("append").parquet(path)
I'm using this to write Parquet files to an S3 location. To write the files, Spark also creates a _temporary directory under the destination and deletes it when the job finishes, so I got an Access Denied error. The admin on our AWS account doesn't want to grant the code delete permission on that folder.
I proposed writing the files to another folder where delete permission can be granted and then copying them over, but the admin still wants me to write the files directly to the destination folder.
Is there a configuration I can set to tell PySpark not to do the deletion on the temporary directory?
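
For context, the closest thing I've found so far is the S3A committer configuration from the Spark/Hadoop cloud integration docs, which is supposed to upload output directly instead of staging it under _temporary and renaming. Below is only a sketch of what I mean (it assumes Hadoop 3.x with the spark-hadoop-cloud / PathOutputCommitProtocol classes on the classpath, and the bucket/prefix name is made up); I have not verified whether it actually removes the need for delete permission:

from pyspark.sql import SparkSession

# Sketch: switch to the S3A "magic" committer so output is uploaded
# directly to the destination rather than staged under _temporary.
# Assumes Hadoop 3.x S3A and the spark-hadoop-cloud module are available;
# not verified to eliminate every delete call on the destination prefix.
spark = (
    SparkSession.builder
    .config("spark.hadoop.fs.s3a.committer.name", "magic")
    .config("spark.hadoop.fs.s3a.committer.magic.enabled", "true")
    .config(
        "spark.sql.sources.commitProtocolClass",
        "org.apache.spark.internal.io.cloud.PathOutputCommitProtocol",
    )
    .config(
        "spark.sql.parquet.output.committer.class",
        "org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter",
    )
    .getOrCreate()
)

path = "s3a://my-bucket/some/prefix"  # hypothetical destination
df = spark.createDataFrame([(1, "a")], ["id", "value"])
df.write.mode("append").parquet(path)

Is that the right direction, or is there a simpler setting for this?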