
I am currently backing up an entire database using pg_dump:

<?php

include_once("../db.php");

$password = getPGPassword();
$user = getPGUser();
$db = getPGDb();

putenv("PGPASSWORD=" . $password); // pass the password to pg_dump via the environment
// NOTE: pg_dump's -i option is deprecated (and removed in PostgreSQL 9.5);
// -F c produces a custom-format archive, so the .sql extension is misleading
$dumpcmd = array("pg_dump", "-i", "-U", escapeshellarg($user), "-F", "c", "-b", "-v", "-f", escapeshellarg('/var/www/backups/backup-'.time().'.sql'), escapeshellarg($db));
exec( join(' ', $dumpcmd), $cmdout, $cmdresult ); // $cmdresult holds the exit code (0 = success)
putenv("PGPASSWORD"); // unset PGPASSWORD again

?>

I know I can use psql to restore the entire database, but is there any way I can selectively restore part of a table with a query? The simplest approach I can think of is creating a temporary database with psql, reading rows from the desired table, deleting conflicting rows from the live table based on the serial primary key, and inserting the restored rows.

Is there a better way to do this? I'll need full SQL query functionality.

Dan Guzman
Max Hudson
    Why bring PHP into the picture? – e4c5 Sep 05 '15 at 01:30
  • I've got a user interface and an admin interface. From the user interface you can make changes to the database. From the admin interface I'd like to be able to restore just one row using a PHP script so it doesn't have to be done manually/from the command line. – Max Hudson Sep 05 '15 at 01:31
  • In that case if you make your dump in the 'plain' format your PHP script should be able to iterate through the file until it finds the primary key that interests you and insert only that row. – e4c5 Sep 05 '15 at 02:04
  • @maxhud Have you seen this one http://stackoverflow.com/questions/1109061/insert-on-duplicate-update-in-postgresql/6527838#6527838 – sitilge Sep 29 '15 at 06:03
  • I'd heartily recommend you don't use PHP as it's another layer that could go wrong and with backups of data that is never a good thing. Could you use a cron job to do it regularly? – user3791372 Oct 01 '15 at 12:53
  • My current cron job runs a php script - the php script basically just runs a linux command, but it also deletes backups from older than 1 month so I don't continuously grow the number of backups I have. – Max Hudson Oct 01 '15 at 17:08
  • Had this issue before, wound up using COPY to export the backup table as CSV, then COPY to create a temp table in target database (which you can run all the SQL scripts you want to get the records you need), then INSERT ... SELECT to update the new table. Keep it all in SQL, so you can wrap it in transaction, and have Postgres clean up the temp table for you. pg_dump and SED (or PHP) is hard, slow, and error prone. Advantage too is that you can adjust old backups to new schema as well. – davmor Oct 02 '15 at 18:09
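
The COPY-based approach described in the last comment could be sketched roughly as follows. The table, column, row id, and file names here are placeholders for illustration only:

```sql
-- On the source: export the backup table to CSV (server-side COPY needs
-- superuser or pg_read_server_files; \copy in psql works client-side).
COPY backup_table TO '/var/www/backups/backup_table.csv' WITH (FORMAT csv, HEADER);

-- On the target, inside one transaction:
BEGIN;

-- Load the dump into a temp table that Postgres drops automatically.
CREATE TEMP TABLE restore_rows (LIKE live_table INCLUDING DEFAULTS) ON COMMIT DROP;
COPY restore_rows FROM '/var/www/backups/backup_table.csv' WITH (FORMAT csv, HEADER);

-- Full SQL is available here: pick exactly the rows to restore.
DELETE FROM live_table
WHERE id IN (SELECT id FROM restore_rows WHERE id = 42);

INSERT INTO live_table
SELECT * FROM restore_rows WHERE id = 42;

COMMIT;
```

Because everything runs in one transaction, a failure at any step rolls back cleanly and the temp table disappears on commit.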

1 Answer


In my opinion, the easiest effective solution is:

  • install a backup server on another machine,
  • perform dump / restore on a regular basis or as needed,
  • connect main and backup servers using foreign data wrapper postgres_fdw.

In my practice, a backup server is mandatory even for relatively small projects. Data replication can be accomplished in many ways. Dump / restore (possibly driven by cron) is one of the simplest, but configuring streaming replication is not especially difficult either.

If cost is a concern, the backup server can be any PC or laptop running any operating system. This can easily be set up even at home.

The benefits of having a backup server are manifold. Sometimes it is a life-saving solution.
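
Wiring the two servers together with postgres_fdw might look like the sketch below; the host, database, table, and credentials are placeholders, as is the restored row id:

```sql
-- On the main server: make the backup server's tables queryable.
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER backup_srv
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'backup.example.com', dbname 'mydb', port '5432');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER backup_srv
    OPTIONS (user 'backup_user', password 'secret');

-- Expose the backed-up tables under a local schema...
CREATE SCHEMA IF NOT EXISTS backup;
IMPORT FOREIGN SCHEMA public LIMIT TO (mytable)
    FROM SERVER backup_srv INTO backup;

-- ...then restore a single row with ordinary SQL:
INSERT INTO public.mytable
SELECT * FROM backup.mytable WHERE id = 42;
```

Once the mapping exists, any SQL query can join or compare live and backed-up data directly, which covers the "full SQL query functionality" requirement.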

klin