
I use dynamic frames to write a parquet file in S3, but if a file already exists my program appends a new file instead of replacing it. The statement I use is this:

glueContext.write_dynamic_frame.from_options(
    frame=table,
    connection_type="s3",
    connection_options={"path": output_dir, "partitionKeys": ["var1", "var2"]},
    format="parquet",
)

Is there anything like "mode": "overwrite" that replaces my parquet files?

Mateo Rod

3 Answers


Currently, AWS Glue doesn't support an 'overwrite' mode, but they are working on this feature.

As a workaround, you can convert the DynamicFrame object to a Spark DataFrame and write it using Spark instead of Glue:

table.toDF() \
    .write \
    .mode("overwrite") \
    .format("parquet") \
    .partitionBy("var1", "var2") \
    .save(output_dir)
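
For context, a rough sketch of how this conversion and write might sit inside a full Glue job script; the job-argument handling, the catalog database/table names, and the output path below are illustrative assumptions, not part of the original answer:

import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job setup (assumed; adapt to your own job)
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glueContext = GlueContext(sc)
job = Job(glueContext)
job.init(args["JOB_NAME"], args)

# Hypothetical source: a DynamicFrame read from the Glue Data Catalog
table = glueContext.create_dynamic_frame.from_catalog(
    database="my_database",   # hypothetical catalog database
    table_name="my_table",    # hypothetical catalog table
)

output_dir = "s3://my-bucket/my-table/"  # hypothetical target path

# Convert to a Spark DataFrame so mode("overwrite") is available
table.toDF() \
    .write \
    .mode("overwrite") \
    .format("parquet") \
    .partitionBy("var1", "var2") \
    .save(output_dir)

job.commit()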
Yuriy Bondaruk

As mentioned earlier, AWS Glue doesn't support an "overwrite" mode. But converting a Glue DynamicFrame back to a PySpark DataFrame can cause a lot of issues with big data.

You just need to add a single call, purge_s3_path(), before writing the DynamicFrame to S3.

glueContext.purge_s3_path(s3_path, {"retentionPeriod": 0})
glueContext.write_dynamic_frame.from_options(
    frame=table,
    connection_type="s3",
    connection_options={"path": s3_path, "partitionKeys": ["var1", "var2"]},
    format="parquet",
)

Please refer to the AWS documentation.
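
Note that purge_s3_path deletes everything under the given prefix. If only certain partitions should be replaced, one option is to purge just those partition prefixes before writing. A minimal sketch, assuming the DynamicFrame only contains data for the partitions being rewritten and that their values are known up front (the list below is purely illustrative):

# Hypothetical list of (var1, var2) partition values rewritten in this run
partitions_to_replace = [("2020-01-01", "A"), ("2020-01-02", "B")]

# Purge only the affected partition prefixes (retentionPeriod 0 deletes immediately)
for v1, v2 in partitions_to_replace:
    partition_path = "{}/var1={}/var2={}".format(s3_path.rstrip("/"), v1, v2)
    glueContext.purge_s3_path(partition_path, {"retentionPeriod": 0})

glueContext.write_dynamic_frame.from_options(
    frame=table,
    connection_type="s3",
    connection_options={"path": s3_path, "partitionKeys": ["var1", "var2"]},
    format="parquet",
)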

Tushar Gupta

If you don't want your process to overwrite everything under "s3://bucket/table_name", you could use

spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")
data.toDF() \
    .write \
    .mode("overwrite") \
    .format("parquet") \
    .partitionBy("date", "name") \
    .save("s3://folder/<table_name>")

This will only update the "selected" partitions in that S3 location. In my case, I have 30 date-partitions in my DynamicFrame "data".

I'm using Glue 1.0 - Spark 2.4 - Python 2.
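
In a Glue job script, the `spark` session used above can be obtained from the GlueContext. A minimal sketch, assuming the standard Glue job setup:

from awsglue.context import GlueContext
from pyspark.context import SparkContext

sc = SparkContext.getOrCreate()
glueContext = GlueContext(sc)
spark = glueContext.spark_session

# Only partitions present in the written DataFrame are replaced; others are left untouched
spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")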

Zach