I have a large table in Redshift, and I need to automate the process of archiving its monthly data.
The current (manual) approach is as follows:
- Unload the Redshift query result to S3
- Create a new backup table
- Copy the files from S3 into the backup table
- Delete the archived data from the original table
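For reference, the manual steps above can be sketched in Redshift SQL. All names here are placeholders (`events` table, `event_date` column, the S3 path, and the IAM role ARN are hypothetical and would need to match your setup):

```sql
-- 1. Unload one month of rows from the original table to S3
UNLOAD ('SELECT * FROM events
         WHERE event_date >= ''2023-01-01'' AND event_date < ''2023-02-01''')
TO 's3://my-archive-bucket/events/2023-01/'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole';

-- 2. Create a backup table with the same schema as the original
CREATE TABLE events_2023_01 (LIKE events);

-- 3. Load the unloaded files from S3 into the backup table
COPY events_2023_01
FROM 's3://my-archive-bucket/events/2023-01/'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole';

-- 4. Remove the archived rows from the original table
DELETE FROM events
WHERE event_date >= '2023-01-01' AND event_date < '2023-02-01';
```

Automating this would mostly mean parameterizing the month boundaries and table name, then running these statements on a schedule.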
I need to automate this process. Is AWS Data Pipeline a good approach for this?
Please suggest any other effective approaches; examples are appreciated.
Thanks for the help!