I have three write steps that currently run in sequence:
- Write dataframe to S3 Bucket A
- Write dataframe to S3 Bucket B
- Write dataframe to database
final_df.write.mode('overwrite').parquet(s3_bucket_a_path)
final_df.write.partitionBy("PART 1", "PART 2").mode('append').parquet(s3_bucket_b_path)
write_to_jdbc(logger, transformed_dataframe, jdbc_url, db_user_nm, db_user_pwd, 'test_table', 'append')
Is there a way to execute these in parallel?
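One common approach (a sketch, not a definitive answer): since Spark's scheduler accepts jobs submitted from multiple driver-side threads, the three writes can be launched concurrently with `concurrent.futures.ThreadPoolExecutor`. The helper below and the variable names (`s3_bucket_a_path`, `s3_bucket_b_path`, etc.) are assumptions for illustration; whether the jobs actually overlap depends on cluster capacity and the scheduler configuration.

```python
# Sketch: run several zero-argument callables concurrently from the
# Spark driver. Each callable wraps one of the writes from the question.
from concurrent.futures import ThreadPoolExecutor

def run_in_parallel(*tasks):
    """Submit each task to its own thread; return results in task order.

    .result() re-raises any exception from the worker thread, so a
    failed write surfaces in the calling thread instead of being lost.
    """
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        futures = [pool.submit(task) for task in tasks]
        return [f.result() for f in futures]

# Hypothetical usage with the three writes (names assumed from the question):
# run_in_parallel(
#     lambda: final_df.write.mode('overwrite').parquet(s3_bucket_a_path),
#     lambda: final_df.write.partitionBy("PART 1", "PART 2")
#                     .mode('append').parquet(s3_bucket_b_path),
#     lambda: write_to_jdbc(logger, transformed_dataframe, jdbc_url,
#                           db_user_nm, db_user_pwd, 'test_table', 'append'),
# )
```

Two caveats worth checking: consider persisting `final_df` (e.g. `final_df.persist()`) before launching the writes so its lineage is not recomputed once per job, and note that with the default FIFO scheduler concurrent jobs may still queue behind one another; Spark's FAIR scheduler pool setting can help if true overlap is needed.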