I currently have a script that reads the existing version of a CSV saved to S3, appends the new rows from a pandas DataFrame, and then writes the combined result directly back to S3.
import boto3
from botocore.exceptions import ClientError

s3_resource = boto3.resource('s3')
try:
    # Read the existing CSV from S3, if there is one.
    csv_prev_content = s3_resource.Object('bucket-name', ticker_csv_file_name).get()['Body'].read().decode('utf-8')
except ClientError:
    csv_prev_content = ''
# Append the new rows and write the combined CSV back to S3.
csv_output = csv_prev_content + curr_df.to_csv(path_or_buf=None, header=False)
s3_resource.Object('bucket-name', ticker_csv_file_name).put(Body=csv_output)
Is there a way to do this, but with a gzip-compressed CSV? I want to read an existing .gz-compressed CSV from S3 if one exists, concatenate it with the contents of the DataFrame, and then overwrite the .gz in S3 with the new combined, compressed CSV, without having to make a local copy.
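I'm guessing the whole round trip can stay in memory with Python's gzip module, along the lines of the sketch below (ticker_gz_file_name is a placeholder key name I made up; s3_resource and curr_df are the same objects as above), but I'm not sure this is the right approach:

import gzip
from botocore.exceptions import ClientError

gz_obj = s3_resource.Object('bucket-name', ticker_gz_file_name)
try:
    # Download and decompress the existing gzip'd CSV, if there is one.
    csv_prev_content = gzip.decompress(gz_obj.get()['Body'].read()).decode('utf-8')
except ClientError as e:
    if e.response['Error']['Code'] != 'NoSuchKey':
        raise
    csv_prev_content = ''
# Append the new rows and re-compress entirely in memory before uploading.
csv_output = csv_prev_content + curr_df.to_csv(path_or_buf=None, header=False)
gz_obj.put(Body=gzip.compress(csv_output.encode('utf-8')))

Is something along these lines reasonable, or is there a better pattern for appending to a compressed object in S3?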