0

We generate a lot of DDL views. I want to create a backup of all the view definitions listed in pg_catalog or information_schema which updates itself every day.

For example: if the number of views yesterday was 10 and I created 5 more views today, the backup table should update itself from 10 to 15 at a specific time.

Sam
  • 59
  • 7
  • Are you using any ETL tool or framework to load data or run/schedule jobs? Create a simple SQL that inserts into a `DDL_History` table and schedule it. – demircioglu Jul 22 '19 at 19:42
  • Hi, it's just a simple table that I want to keep a backup of. So the backup table will take data from the existing table. I want it to do this daily. No ETL tool or anything. Can you provide an example? – Sam Jul 22 '19 at 19:53
  • Creating the SQL to move data from one table to another is easy. You need a scheduler to run that SQL daily, because you cannot schedule this in Redshift or any other database on its own. It could be a Lambda function, a cron job, a simple Python script, etc. This is the part you need to decide on. – demircioglu Jul 22 '19 at 20:37
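The "insert into a `DDL_History` table" idea from the comments can be sketched as two SQL statements. This is a minimal sketch, assuming Redshift's `pg_views` catalog view; the `ddl_history` table name and its columns are assumptions, not anything given in the thread.

```python
# Hedged sketch: the history table and the daily snapshot statement.
# The ddl_history table name and column sizes are assumptions.

# One-time setup: a table to accumulate daily snapshots of view DDL.
CREATE_HISTORY_SQL = """
CREATE TABLE IF NOT EXISTS ddl_history (
    captured_at TIMESTAMP,
    schemaname  VARCHAR(128),
    viewname    VARCHAR(128),
    definition  VARCHAR(65535)
);
"""

# The statement a scheduler would run once a day: copy the current
# view definitions from pg_views, skipping the system schemas.
SNAPSHOT_SQL = """
INSERT INTO ddl_history (captured_at, schemaname, viewname, definition)
SELECT GETDATE(), schemaname, viewname, definition
FROM pg_views
WHERE schemaname NOT IN ('pg_catalog', 'information_schema');
"""
```

Running `SNAPSHOT_SQL` daily gives one timestamped row per view per day, so the history grows from 10 rows to 15 when 5 views are added, as in the example above.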

1 Answer

0

You can write a scheduled Lambda function that queries for the current views and then updates the backup table based on that query. Documentation for how to connect to a Redshift cluster from a Lambda function can be found here.
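A scheduled Lambda along the lines of this answer might look like the following. This is a sketch, not a tested implementation: it assumes `psycopg2` is bundled in the deployment package, that connection details arrive via environment variables (names are hypothetical), and that a `views_backup` table already exists.

```python
# Hedged sketch of a Lambda handler that snapshots view definitions
# into a backup table. The views_backup table and the REDSHIFT_*
# environment variable names are assumptions for illustration.
import os

# Copies today's view definitions from pg_views into the backup table.
SNAPSHOT_SQL = """
INSERT INTO views_backup (captured_at, schemaname, viewname, definition)
SELECT GETDATE(), schemaname, viewname, definition
FROM pg_views
WHERE schemaname NOT IN ('pg_catalog', 'information_schema');
"""

def lambda_handler(event, context):
    # psycopg2 is not in the Lambda runtime by default; it must be
    # packaged with the function (hence the local import).
    import psycopg2
    conn = psycopg2.connect(
        host=os.environ["REDSHIFT_HOST"],
        port=int(os.environ.get("REDSHIFT_PORT", "5439")),
        dbname=os.environ["REDSHIFT_DB"],
        user=os.environ["REDSHIFT_USER"],
        password=os.environ["REDSHIFT_PASSWORD"],
    )
    try:
        with conn.cursor() as cur:
            cur.execute(SNAPSHOT_SQL)
        conn.commit()
    finally:
        conn.close()
```

Triggering this handler from an EventBridge (CloudWatch Events) schedule rule, e.g. once a day, gives the daily refresh the question asks for.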

ketcham
  • 922
  • 4
  • 15
  • Hi, I want to do it using a SQL query – Sam Jul 22 '19 at 18:57
  • Unfortunately there's no way to have a scheduled query with Redshift on its own... You will likely want to create a lambda to perform this query for you daily. This is of course assuming you don't want to perform the query manually every day :) You can find more here: https://stackoverflow.com/questions/42564910/how-to-execute-scheduled-sql-script-on-amazon-redshift – ketcham Jul 22 '19 at 19:00