I am saving a Spark DataFrame to an S3 bucket. The default storage class for the saved files is STANDARD, but I need it to be STANDARD_IA. Is there an option to achieve this? I have looked through the Spark source code and found no such option on DataFrameWriter: https://github.com/apache/spark/blob/branch-2.1/sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala
Below is the code I am using to write to S3:
val df = spark.sql(<sql>)
df.coalesce(1).write.mode("overwrite").parquet(<s3path>)
Edit: As a workaround, I am now using CopyObjectRequest to change the storage class of the written Parquet output after the fact:
val copyObjectRequest = new CopyObjectRequest(bucket, key, bucket, key).withStorageClass(<storageClass>)
s3Client.copyObject(copyObjectRequest)
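For context, a fuller sketch of that workaround might look like the following. The bucket name, key prefix, and client construction here are assumptions for illustration; the real values come from wherever the Parquet output was written. Since Spark writes a directory of part files (plus a _SUCCESS marker), every object under the output prefix is copied onto itself with the new storage class:

    import com.amazonaws.services.s3.AmazonS3ClientBuilder
    import com.amazonaws.services.s3.model.{CopyObjectRequest, StorageClass}
    import scala.collection.JavaConverters._

    // Hypothetical values; substitute the bucket and prefix used in the write.
    val bucket = "my-bucket"
    val prefix = "path/to/parquet-output/"

    val s3Client = AmazonS3ClientBuilder.defaultClient()

    // Copy each object under the output prefix in place, changing only the
    // storage class. Pagination is omitted for brevity; a coalesce(1) output
    // normally contains only a handful of objects.
    s3Client
      .listObjectsV2(bucket, prefix)
      .getObjectSummaries.asScala
      .foreach { summary =>
        val key = summary.getKey
        val copyRequest = new CopyObjectRequest(bucket, key, bucket, key)
          .withStorageClass(StorageClass.StandardInfrequentAccess)
        s3Client.copyObject(copyRequest)
      }

This means paying for an extra COPY request per object on top of the original PUT, but it does get the files into STANDARD_IA immediately after the job finishes.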