405

I am trying to copy an entire table from one database to another in Postgres. Any suggestions?

Braiam
nix
  • If you're okay with installing DBeaver, it has a really simple way of transferring data between two databases you're connected to. Just right-click the source table, select "Export Data", choose "Database table(s)" as the target, and set the destination database. – rovyko Mar 22 '20 at 07:55
  • @rovyko I'm trying to do the same thing in DBeaver but using dynamic SQL. Please let me know if you know how to do it. – Roshan Maharjan Aug 02 '22 at 08:30

25 Answers

456

Extract the table and pipe it directly to the target database:

pg_dump -t table_to_copy source_db | psql target_db

Note: If the other database already has the table set up, you should use the -a flag to import data only, else you may see weird errors like "Out of memory":

pg_dump -a -t table_to_copy source_db | psql target_db
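
As the comments below point out, the same pipe works between hosts if you add connection options on either side; a sketch with placeholder users and hosts:

pg_dump -U source_user -h source_host -t table_to_copy source_db | psql -U target_user -h target_host target_db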
thomax
  • How will this work for remote-db links? E.g., I need to dump from a different location. – curlyreggie Jan 06 '14 at 13:09
  • @curlyreggie haven't tried this, but I see no reason why it wouldn't work. Try adding user and server specifics to the command, like so `pg_dump -U remote_user -h remote_server -t table_to_copy source_db | psql target_db` – thomax Jan 06 '14 at 14:28
  • You can try this: "pg_dump -U remote_user -h remote_server -t table_to_copy source_db | psql target_db -U remote_user -h remote_server " – Hua Zhang Dec 22 '15 at 18:22
  • note that if the other database already has the table set up, you should use the `-a` flag for _data only_, i.e. `pg_dump -a -t my_table my_db | psql target_db`. While I'm here, if your database is on a server, I find it easier to just dump the database to a file, scp that file to the server, and then send the contents of the file to psql. e.g. `pg_dump -a -t my_table my_db > my_file.sql`, and after putting that on your server --> `psql my_other_db < my_file.sql` – Nick Brady Mar 28 '16 at 23:51
  • How do I get this to work for case sensitive "table_to_copy"? Doesn't work for me when I used: create TABLE "table_to_copy" (arguments...) – Eamonn Kenny Dec 12 '16 at 11:33
  • @EamonnKenny to dump a case-sensitive table, do: `pg_dump -t '"tableToCopy"' source_db | psql target_db`. Note that single AND double quotes surround the table name – gilad905 May 09 '17 at 14:04
  • @thomax this is not working for me when running the line in the terminal, have they made changes over the years? – Luffydude Jun 01 '18 at 11:25
  • A pitfall is when not all extensions in the source database exist in the target database. You will get many warnings, as if there were SQL injection. – Jan Apr 10 '20 at 18:58
  • Is it possible to copy all tables that are within a certain schema to an identical schema in another remote DB? – Mario Jan 06 '21 at 12:43
  • Is it possible to copy a table to another database but with a different (existing) schema? Is there any switch I could add to the `psql target_db` part? – 588chm Jun 02 '21 at 07:16
  • I get a ton of syntax errors, apparently from psql trying to interpret the content in my fields - ? – Dogweather Jun 26 '21 at 05:49
  • Hey, does anyone have any idea how to apply this method if the two tables (source and target) exist in the same database but in different schemas? – Ibrahim Noor Jan 10 '22 at 08:06
151

You can also use the backup functionality in pgAdmin II. Just follow these steps:

  • In pgAdmin, right-click the table you want to move and select "Backup"
  • Pick the directory for the output file and set Format to "plain"
  • Click the "Dump Options #1" tab, check "Only data" or "Only schema" (depending on what you are doing)
  • Under the Queries section, check "Use Column Inserts" and "Use Insert Commands".
  • Click the "Backup" button. This outputs to a .backup file
  • Open this new file using Notepad. You will see the INSERT scripts needed for the table/data. Copy and paste these into the SQL page of the new database in pgAdmin. Run as pgScript - Query -> Execute as pgScript (F6)

Works well and can do multiple tables at a time.
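
If you'd rather script these steps, a comment below shows the command pgAdmin builds in the background; roughly, with placeholder host, table, and file names:

pg_dump --host myhost --port 5432 --username myusername --format=p --data-only --column-inserts --table mytable --file /my/path/mytable.sql mydb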

a2ron44
  • This is a good gui-based solution for moving data between databases. Thanks! – kgx Mar 07 '13 at 19:42
  • You can select multiple tables under the `Objects` section. On OSX, click the SQL button or get the `SQL Editor` via the `Tools` menu to paste in the SQL copied from the backup file. – Aleck Landgraf Aug 27 '13 at 18:47
  • Works, thanks. Very slow though on big tables... is there a better way to speed it up? (like ignoring foreign keys or something?) – TimoSolo Sep 04 '13 at 12:39
  • @Timothy Here's [the postgres documentation page](http://www.postgresql.org/docs/9.3/static/populate.html) on how to speed up backing up and restoring – laurie Sep 27 '13 at 10:33
  • Old answer but still relevant; works great. Just don't forget to set "Disable triggers" when exporting the whole database – norbertas.gaulia Sep 04 '15 at 15:38
  • In the background, Pgadmin creates a file holding all necessary SQL INSERT statements with this command: `pg_dump --file "/my/path/filename.txt" --host "myhost" --port "myport" --username "myusername" --no-password --verbose --format=p --table "mytable" "mydb" --data-only --column-inserts --inserts` – elke Dec 30 '17 at 15:42
  • Just remember to check through the other parts of the scripts for anything important like: ALTER TABLE ONLY ADD CONSTRAINT PRIMARY KEY (field1, field2, field3, etc); You may also see things like : COMMENT ON TABLE IS 'This text describes what the table is used for, etc'; Just copy and paste these into the Query Editor along with the CREATE TABLE one. – Paul Pritchard Sep 25 '20 at 10:12
112

Using dblink would be more convenient!

truncate table tableA;

insert into tableA
select *
from dblink('hostaddr=xxx.xxx.xxx.xxx dbname=mydb user=postgres',
            'select a,b from tableA')
       as t1(a text,b text);
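
Note that dblink is a contrib extension, so it has to be enabled once in the database where you run the query (you may need the postgresql-contrib package installed first):

psql -d mydb -c "CREATE EXTENSION IF NOT EXISTS dblink;"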
SebaGra
tinychen
43

Using psql, on a Linux host that has connectivity to both servers:

( export PGPASSWORD=password1
  psql -U user1 -h host1 database1 \
  -c "copy (select field1,field2 from table1) to stdout with csv" ) \
|
( export PGPASSWORD=password2
  psql -U user2 -h host2 database2 \
  -c "copy table2 (field1, field2) from stdin csv" )
divinedragon
Alexey Sviridov
  • No need for export, `PGPASSWORD=password1 psql -U ...` then you don't even need explicit subshells! Ordinarily, you'll want to do a couple things to set up first, so subshells may be necessary anyway. Also, the passwords won't be exported into subsequent processes. Thanks! – lmat - Reinstate Monica Dec 14 '19 at 15:40
  • @LimitedAtonement Actually you're right, export and subshells aren't necessary. It's just part of a more complicated script, and I didn't even try it without export and subshells, so I provide it as-is only to be honest and to share a solution that worked – Alexey Sviridov Dec 15 '19 at 07:37
  • The table must exist in destination DB. To create it, try `pg_dump -t '' --schema-only` – fjsj May 21 '20 at 19:36
  • Put passwords to [`~/.pgpass`](https://www.postgresql.org/docs/current/libpq-pgpass.html). – Hans Ginzel Mar 22 '21 at 21:16
24

First install dblink

Then, you would do something like:

INSERT INTO t2
SELECT * FROM
dblink('host=1.2.3.4
        user=*****
        password=******
        dbname=D1', 'select * from t1') tt(
  id int,
  col_1 character varying,
  col_2 character varying,
  col_3 int,
  col_4 varchar
);
Felipe Augusto
  • This answer is great because it allows one to filter copied rows (add WHERE clause in the dblink 2nd argument). However, one needs to be explicit about column names (Postgres 9.4) with something like: `INSERT INTO l_tbl (l_col1, l_col2, l_col3) SELECT * FROM dblink('dbname=r_db hostaddr=r_ip password=r_pass user=r_usr', 'select r_col1, r_col2, r_col3 from r_tbl where r_col1 between ''2015-10-29'' AND ''2015-10-30'' ') AS t1(col1 MACADDR, col2 TIMESTAMP, col3 NUMERIC(7,1));` (l means local, r is remote. Escape single quotes. Provide col types.) – hamx0r Nov 04 '15 at 16:48
24

If both servers are remote, you can do the following:

pg_dump -U Username -h DatabaseEndPoint -a -t TableToCopy SourceDatabase | psql -h DatabaseEndPoint -p portNumber -U Username -W TargetDatabase

It will copy the mentioned table from the source database into the identically named table of the target database, provided the schema already exists there.

Piyush S. Wanare
15

Use pg_dump to dump table data, and then restore it with psql.
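
For example, something along these lines (database and table names are placeholders):

pg_dump -t my_table source_db > my_table.sql
psql target_db < my_table.sql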

Pablo Santa Cruz
  • Then use another database role to connect, a role that has enough permissions. http://www.postgresql.org/docs/8.4/static/app-pgdump.html – Frank Heikens Jul 07 '10 at 13:43
  • What am I doing wrong? pg_dump -t "tablename" dbName --role "postgres" > db.sql "postgres" would be the user I'm trying to set the role to. It still gives me "Access is denied". – nix Jul 07 '10 at 14:48
  • Do you have permissions to write the db.sql file? – pcent Jul 07 '10 at 16:29
  • How do I check what permissions I have? – nix Jul 07 '10 at 17:27
  • This thread is old, but for anyone else having the problem, try using the 'Tools --> Backup' menu in PgAdminIII, which seems to get around the permission problems. – John May 30 '19 at 18:58
  • not really a helpful answer, given that the other answers show you how to use pg_dump – scottlittle Sep 23 '20 at 21:28
11

Here is what worked for me. First dump to a file:

pg_dump -h localhost -U myuser -C -t my_table -d first_db > /tmp/table_dump

then load the dumped file:

psql -U myuser -d second_db < /tmp/table_dump
max
10

You could do the following:

pg_dump -h <host ip address> -U <host db user name> -t <host table> <host database> | psql -h localhost -d <local database> -U <local db user>

Gowtham Balusamy
8

To move a table from database A to database B at your local setup, use the following command:

pg_dump -h localhost -U owner-name -p 5432 -C -t table-name database1 | psql -U owner-name -h localhost -p 5432 database2
Felipe Augusto
RKT
7

Same as answers by user5542464 and Piyush S. Wanare but split in two steps:

pg_dump -U Username -h DatabaseEndPoint -a -t TableToCopy SourceDatabase > dump
cat dump | psql -h DatabaseEndPoint -p portNumber -U Username -W TargetDatabase

otherwise the pipe prompts for the two passwords at the same time.

Adobe
7

I was using DataGrip (by IntelliJ IDEA), and it was very easy to copy data from a table in one database to another.

First, make sure you are connected to both data sources in DataGrip.

Select Source Table and press F5 or (Right-click -> Select Copy Table to.)

This will show you a list of all tables (you can also search using a table name in the popup window). Just select your target and press OK.

DataGrip will handle everything else for you.

Developine
5

I tried some of the solutions here and they were really helpful. In my experience the best solution is to use the psql command line, but sometimes I don't feel like using it. So here is another solution for pgAdminIII:

create table table1 as (
  select t1.*
  from dblink(
    'dbname=dbSource user=user1 password=passwordUser1',
    'select * from table1'
  ) as t1(
    fieldName1 bigint,
    fieldName2 text,
    fieldName3 double precision
  )
);

The drawback of this method is that the column names and types of the table you want to copy must be written out explicitly.

Michał Szkudlarek
Eloy A
5

pg_dump does not always work.

Given that you have the same table DDL in both DBs, you can hack it with stdout and stdin as follows:

 # grab the list of cols straight from bash

 psql -d "$src_db" -t -c \
 "SELECT column_name 
 FROM information_schema.columns 
 WHERE 1=1 
 AND table_name='"$table_to_copy"'"
 # ^^^ filter autogenerated cols if needed     

 psql -d "$src_db" -c  \
 "copy ( SELECT col_1 , col2 FROM table_to_copy) TO STDOUT" |\
 psql -d "$tgt_db" -c "\copy table_to_copy (col_1 , col2) FROM STDIN"
Felipe Augusto
Yordan Georgiev
4

Check this Python script:

python db_copy_table.py "host=192.168.1.1 port=5432 user=admin password=admin dbname=mydb" "host=localhost port=5432 user=admin password=admin dbname=mydb" alarmrules -w "WHERE id=19" -v
Source number of rows = 2
INSERT INTO alarmrules (id,login,notifybyemail,notifybysms) VALUES (19,'mister1',true,false);
INSERT INTO alarmrules (id,login,notifybyemail,notifybysms) VALUES (19,'mister2',true,false);
themadmax
4

As an alternative, you could also expose your remote tables as local tables using the foreign data wrapper extension. You can then insert into your tables by selecting from the tables in the remote database. The only downside is that it isn't very fast.
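
A rough sketch of that setup with the postgres_fdw extension (all host, user, and table names are placeholders):

psql -d local_db <<'SQL'
CREATE EXTENSION IF NOT EXISTS postgres_fdw;
CREATE SERVER remote_srv FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (host 'remote_host', port '5432', dbname 'remote_db');
CREATE USER MAPPING FOR CURRENT_USER SERVER remote_srv
  OPTIONS (user 'remote_user', password 'remote_password');
CREATE SCHEMA IF NOT EXISTS remote_tables;
IMPORT FOREIGN SCHEMA public LIMIT TO (table_to_copy)
  FROM SERVER remote_srv INTO remote_tables;
-- the remote table now reads like a local one
INSERT INTO table_to_copy SELECT * FROM remote_tables.table_to_copy;
SQL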

ThatDataGuy
3

If both DBs (source and target) are password-protected, the terminal won't ask for both passwords; the password prompt will appear only once. To work around this, pass the passwords along with the commands:

PGPASSWORD=<password> pg_dump -h <hostIpAddress> -U <hostDbUserName> -t <hostTable> <hostDatabase> | PGPASSWORD=<pwd> psql -h <toHostIpAddress> -d <toDatabase> -U <toDbUser>
Dante
3

For DBeaver users: you can "Export data" to a table in another database.

The only error I kept facing was due to a wrong Postgres driver:

SQL Error [34000]: ERROR: portal "c_2" does not exist
    ERROR: Invalid protocol sequence 'P' while in PortalSuspended state.

Here is the official wiki on how to export data: https://github.com/dbeaver/dbeaver/wiki/Data-transfer

prayagupa
2

You have to use dblink to copy one table's data into another table in a different database. You have to install and configure the dblink extension to execute cross-database queries.

I have already created a detailed post on this topic. Please visit this link.

Anvesh
1

You can do it in two simple steps:

# dump the database in custom-format archive
pg_dump -Fc mydb > db.dump

# restore the database
pg_restore -d newdb db.dump

In the case of remote databases:

# dump the database in custom-format archive
pg_dump -U mydb_user -h mydb_host -t table_name -Fc mydb > db.dump

# restore the database
pg_restore -U newdb_user -h newdb_host -d newdb db.dump
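
A nice property of the custom-format archive is that you can also restore just one table out of a full dump; a sketch (table name is a placeholder):

pg_restore -U newdb_user -h newdb_host -d newdb -t table_name db.dump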
OM Bharatiya
1

Combining this answer and this answer, which is more convenient as you don't need to specify the columns:

TRUNCATE TABLE tableA;

INSERT INTO tableA
SELECT (rec).*
FROM dblink('hostaddr=xxx.xxx.xxx.xxx dbname=mydb user=postgres',
            'SELECT myalias FROM tableA myalias')
       AS t1(rec tableA);
pkExec
0

If you want to copy data from one server's database to another server's database, you have to create a dblink connection between the two databases. Alternatively, you can export the table data to CSV and import it into the other database's table; the table fields should be the same as in the source table.
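
The CSV route could look roughly like this with psql's \copy (file path and table names are placeholders):

psql -d source_db -c "\copy source_table TO '/tmp/source_table.csv' WITH CSV HEADER"
psql -d target_db -c "\copy target_table FROM '/tmp/source_table.csv' WITH CSV HEADER"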

0

Without any piping, on Windows, you can use:

Dump - Edit this to be on one line

"C:\Program Files\PostgreSQL\14\bin\pg_dump.exe"
--host="host-postgres01"
--port="1234"
--username="user01"
-t "schema01.table01"
--format=c
-f "C:\Users\user\Downloads\table01_format_c.sql"
"DB-01"

Restore - Edit this to be on one line

"C:\Program Files\PostgreSQL\14\bin\pg_restore.exe"
--host="host-postgres02"
--port="5678"
--username="user02"
-1
--dbname="DB-02"
"C:\Users\user\Downloads\table01_format_c.sql"

You will be prompted for user passwords.

This solution will put the new table in a schema with the same name (schema01).

scrollout
0

It can be done in a fairly simple manner. Just use the following command:

pg_dump -U <user_name> -t <table_name> <source_database> | psql -U <user_name> <targeted_database>

Replace the values in <> with your specific parameters, and remove the <> as well.

-1

If you run pgAdmin (Backup: pg_dump, Restore: pg_restore) from Windows, it will by default try to write the output file to C:\Windows\System32, and that's why you get a Permission/Access denied error, not because the user postgres is not elevated enough. Run pgAdmin as Administrator, or simply choose an output location other than the Windows system folders.

Felipe Augusto
Imre