
I created a temp table in my PostgreSQL DB using the following query:

SELECT * INTO TEMP TABLE tempdata FROM data WHERE id=2004;

Now I want to create a backup of this temp table tempdata.
So I use the following command-line invocation:

"C:\Program Files\PostgreSQL\9.0\bin\pg_dump.exe" -F t -a -U my_admin -t tempdata myDB >"e:\mydump.backup"  

I get a message saying

pg_dump: No matching tables were found  

Is it possible to create a dump of temp tables?
Am I doing it correctly?

P.S.: I would also want to restore the same. I don't want to use any extra components.

TIA.

Shirish11
  • It would help if you could give some background on what you are trying to achieve. What are you loading into these temp tables? Why pg_dump them? Also, how do you expect to restore a temp table - what result would you expect, given that temp tables are *temporary* and go away at the end of the session? Restoring a temp table would have no effect even if you could do it. – Craig Ringer Dec 14 '11 at 11:27
  • @CraigRinger I am trying to do something like [this](http://stackoverflow.com/questions/8489464/postgresql-dump-restore). Since my data is scattered over multiple tables, I want to take a backup of only some specific data from all the tables and dump it into a backup file. Later on I want to restore this data on some other system which may/may not have this data. – Shirish11 Dec 14 '11 at 11:38
  • @CraigRinger I have implemented this using non-temp tables but the solution is not that effective. – Shirish11 Dec 14 '11 at 11:43
  • The linked problem sounds like it was basically custom designed for `COPY (SELECT ....) TO 'filename'` and `COPY tablename FROM 'filename'`, except for the single-file bit. For that: dump to multiple files, include a psql script that runs COPY for each of them for restore, and bundle it in a zip file. You now have a single file (a sketch follows these comments). – Craig Ringer Dec 14 '11 at 12:11
  • @CraigRinger I am implementing all this through an application, so zipping the file and unzipping it again would be a problem. – Shirish11 Dec 14 '11 at 12:24
  • In that case I suspect you're stuffed. What's the app? You can't modify it to do something as simple as zipping up a file? Or reading the output from multiple `COPY ... TO stdout` commands and writing them to a text file with some `CREATE TABLE` and `COPY ... TO stdin` commands between? What kind of app is that brain-dead? – Craig Ringer Dec 14 '11 at 13:36
  • @CraigRinger never thought compression would have been so simple, Thank you for the idea. – Shirish11 Jan 05 '12 at 11:23
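
A minimal sketch of the multi-file approach from the comment above, assuming everything is driven through psql. The second table name `other_table` is made up for illustration; psql's `\copy` writes files on the client side, so no superuser rights are needed:

-- dump.sql: run with  psql -U my_admin -d myDB -f dump.sql
-- One client-side CSV file per table:
\copy (SELECT * FROM data WHERE id = 2004) TO 'data.csv' CSV
\copy (SELECT * FROM other_table WHERE id = 2004) TO 'other_table.csv' CSV

-- restore.sql: run against the target database after creating the tables.
-- Zip the CSV files together with this script to get a single file to ship:
\copy data FROM 'data.csv' CSV
\copy other_table FROM 'other_table.csv' CSV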

1 Answer


I don't think you'll be able to use pg_dump for that temporary table. The problem is that temporary tables only exist within the session where they were created:

> PostgreSQL instead requires each session to issue its own CREATE TEMPORARY TABLE command for each temporary table to be used. This allows different sessions to use the same temporary table name for different purposes, whereas the standard's approach constrains all instances of a given temporary table name to have the same table structure.

So you'd create the temporary table in one session but pg_dump would be using a different session that doesn't have your temporary table.
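
You can see this with two psql sessions (a minimal illustration using the table from the question):

-- Session 1:
SELECT * INTO TEMP TABLE tempdata FROM data WHERE id = 2004;
SELECT count(*) FROM tempdata;   -- works in this session

-- Session 2 (a second psql window, or the session pg_dump opens):
SELECT count(*) FROM tempdata;
-- ERROR:  relation "tempdata" does not exist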

However, COPY should work:

> COPY moves data between PostgreSQL tables and standard file-system files.

but you'll either be copying the data to the standard output or a file on the database server (which requires superuser access):

> COPY with a file name instructs the PostgreSQL server to directly read from or write to a file. The file must be accessible to the server and the name must be specified from the viewpoint of the server.
> [...]
> COPY naming a file is only allowed to database superusers, since it allows reading or writing any file that the server has privileges to access.

So using COPY to dump the temporary table straight to a file might not be an option. You can COPY to standard output, though; how well that works depends on how you're accessing the database.
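
If you're driving everything through psql, one way around this is to create the temp table and copy it out in the same session: psql's `\copy` is a client-side wrapper around COPY ... TO STDOUT, so it writes the file on the client and needs no superuser rights. A sketch, assuming the table from the question:

-- dump_tempdata.sql: run with
--   psql -U my_admin -d myDB -f dump_tempdata.sql
-- Both statements run in one session, so the temp table is visible:
SELECT * INTO TEMP TABLE tempdata FROM data WHERE id = 2004;
\copy tempdata TO 'e:/mydump.csv' CSV

To restore, recreate the table and load the file back with \copy tempdata FROM 'e:/mydump.csv' CSV.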

You might have better luck if you didn't use temporary tables. You would, of course, have to manage unique table names to avoid conflicts with other sessions and you'd have to take care to ensure that your non-temporary temporary tables were dropped when you were done with them.
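
For example (a sketch only; the date-suffixed name is a made-up stand-in for whatever unique-naming scheme you choose):

-- 1. Use an ordinary table with a unique name instead of a temp table:
SELECT * INTO TABLE tempdata_20111214 FROM data WHERE id = 2004;

-- 2. pg_dump can now see it from its own session (run from the shell):
--    pg_dump.exe -F t -a -U my_admin -t tempdata_20111214 myDB > "e:\mydump.backup"

-- 3. Drop it once the dump is done:
DROP TABLE tempdata_20111214;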

mu is too short
  • I can't use `COPY` since there is more than one table to back up. Any other suggestions? – Shirish11 Dec 14 '11 at 06:52
  • @Shirish11: COPY them one by one or don't use temp tables. If you don't need to worry about uniqueness (i.e. you can guarantee that only one session will need to write to your "temp" tables at a time) then you can use non-temp tables and `pg_dump`. – mu is too short Dec 14 '11 at 07:09
  • I have been using that but it's not very effective. Any errors during execution will cause me to lose all my data. (Would it work if I used the same session for creating my `temp tables` and running `pg_dump`?) – Shirish11 Dec 14 '11 at 07:23
  • I don't think you can attach `pg_dump` to an existing session. I don't think temp tables are the right tool in this case. – mu is too short Dec 14 '11 at 07:35
  • @shirish11 I think @muistooshort is quite right - temp tables are NOT the right tool for this job. Either COPY to multiple different files (possibly from within a PL/PgSQL function if you want to encapsulate the work) or use non-temp tables. If you want to isolate concurrent runs from each other, try creating your non-temp tables in different schemas (see `CREATE SCHEMA`) and telling pg_dump to only dump the particular schema you're interested in (a sketch follows these comments). – Craig Ringer Dec 14 '11 at 11:25
  • @Craig: A schema and search path setting might help; I'm not sure off the top of my head if you can merge schemas during a dump or restore, so you might still have the unique-name problem. You should be able to reset the search path with a little script, though. – mu is too short Dec 14 '11 at 17:51
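
A minimal sketch of that schema-per-run idea (the schema name `run_20111214` is made up for illustration):

-- Create a throwaway schema and build ordinary tables inside it:
CREATE SCHEMA run_20111214;
CREATE TABLE run_20111214.tempdata AS SELECT * FROM data WHERE id = 2004;

-- Dump only that schema (shell; -n restricts pg_dump to one schema):
--   pg_dump.exe -F t -U my_admin -n run_20111214 myDB > "e:\mydump.backup"

-- Restore elsewhere with pg_restore, then clean up:
--   pg_restore.exe -U my_admin -d otherDB "e:\mydump.backup"
DROP SCHEMA run_20111214 CASCADE;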