361

I'm trying to back up and restore a PostgreSQL database as explained in the Docker documentation, but the data is not restored.

The volumes used by the database image are:

VOLUME  ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]

and the CMD is:

CMD ["/usr/lib/postgresql/9.3/bin/postgres", "-D", "/var/lib/postgresql/9.3/main", "-c", "config_file=/etc/postgresql/9.3/main/postgresql.conf"]

I create the DB container with this command:

docker run -it --name "$DB_CONTAINER_NAME" -d "$DB_IMAGE_NAME"

Then I run another container linked to it and insert some data manually:

docker run -it --rm --link "$DB_CONTAINER_NAME":db "$DB_IMAGE_NAME" sh -c 'exec bash'
psql -d test -h $DB_PORT_5432_TCP_ADDR
# insert some data in the db
<CTRL-D>
<CTRL-D>

The tar archive is then created:

$ sudo docker run --volumes-from "$DB_CONTAINER_NAME" --rm -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /etc/postgresql /var/log/postgresql /var/lib/postgresql

Now I remove the container used for the db and create another one, with the same name, and try to restore the data inserted before:

$ sudo docker run --volumes-from "$DB_CONTAINER_NAME" --rm -v $(pwd):/backup ubuntu tar xvf /backup/backup.tar 

But the tables are empty. Why is the data not properly restored?

Daniel Serodio
Carl Levasseur

15 Answers

903

Backup your databases

docker exec -t your-db-container pg_dumpall -c -U postgres > dump_$(date +%d-%m-%Y_%H_%M_%S).sql

Restore your databases

cat your_dump.sql | docker exec -i your-db-container psql -U postgres
Soviut
Forth
  • Yep, that's the postgres way to do it, but I think the docker way should always be preferred when you use it – Carl Levasseur Apr 29 '15 at 09:31
  • To save some space on disk you might want to pipe the dump to gzip: `docker exec -t your-db-container pg_dumpall -c -U postgres | gzip > /var/data/postgres/backups/dump_$(date +%d-%m-%Y_%H_%M_%S).gz` – Tarion Oct 24 '16 at 16:08
  • @Tarion How can I restore `.xz` or `.gz` files packed this way? – kasiacode May 04 '17 at 09:42
  • Just unzip the data before you restore it. To do it as a one-liner, replace the `cat your_dump.sql` with the unzip command and pipe that to docker exec instead of the `cat` result. – Tarion May 04 '17 at 18:11
  • When I run the backup command it says **"docker exec" requires at least 2 argument(s).** – Tanzil Khan Jul 02 '17 at 11:08
  • I'm running Docker on Windows and this works fine. But I'm wondering... what would a regular backup schedule look like? I don't ever want to lose my data. – Christopher Painter Jul 26 '17 at 12:47
  • A problem with this approach is providing the password: if the password prompt is displayed, the backup will fail. It is possible to provide the password through the postgres ENV variables, though (see the sketch after this comment thread). – andho Sep 15 '17 at 08:08
  • The date format is messed up, so double check that before you copy and paste. – vidstige May 01 '18 at 23:35
  • Docker is often an insidious, low quality, leaky abstraction over running processes. I struggled to do this simple task for over an hour, and the answer is something that's never used in a normal postgres flow, which we have to do because docker doesn't support better command execution. – Andy Ray Apr 15 '19 at 03:00
  • For those who couldn't figure out how to get the date formatting working: `docker exec -t your-db-container pg_dumpall -c -U postgres | gzip > ./tmp/dump_$(date +"%Y-%m-%d_%H_%M_%S").gz` – 9_Dave_9 May 16 '20 at 11:27
  • When restoring the database, make sure you add `-d your-db-name` to the restore command **if** your database isn't named `postgres`. – J86 Oct 26 '20 at 22:15
  • With the given commands, you may run into an ugly surprise if your DB contains UTF-8 characters. See [this question](https://stackoverflow.com/questions/63934856/why-is-pg-restore-segfaulting-in-docker) for more details and a solution. – blubb Nov 27 '20 at 09:57
  • `'gzip' is not recognized as an internal or external command, operable program or batch file.` on Windows. – May 28 '21 at 20:12
  • Looks like this may produce some problems with encodings; I got ????? instead of some Cyrillic text. – YakovL Sep 15 '21 at 15:12
  • Note: for `docker-compose` users, that's `docker-compose exec -T` (capital T). – Kabir Sarin Feb 05 '22 at 22:36
  • How do we use this approach with DB volumes? I currently have the following volume for my PostgreSQL container: `volumes: - database_data:/var/lib/postgresql` – FARZAD Jan 12 '23 at 13:07
  • @AndyRay at any time you can `docker exec -it ${container_name} bash -il` and break into a fully functional CLI for postgres. Any work can be saved with `docker commit`. Any files can be copied with `docker cp`. A LOT has changed in the 4 years since your comment. – Derek Adair Jan 17 '23 at 13:39
  • When restoring I'm facing this issue: `ERROR: database "template1" does not exist \connect: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: FATAL: database "template1" does not exist` – Anish Jul 24 '23 at 08:04
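
Pulling together the gzip and password points from the comments above, a minimal sketch (using the same your-db-container/postgres placeholders; PGPASSWORD is the standard libpq environment variable and is only needed if your setup actually prompts for a password):

# compressed backup; -t is omitted so no pseudo-TTY can alter the output,
# and the password is supplied via the environment instead of a prompt
docker exec -e PGPASSWORD=your-password your-db-container \
  pg_dumpall -c -U postgres | gzip > dump_$(date +%Y-%m-%d_%H_%M_%S).sql.gz

# restore from the compressed dump; add -d your-db-name if your database
# isn't named postgres (see the comment above)
gunzip -c your_dump.sql.gz \
  | docker exec -i -e PGPASSWORD=your-password your-db-container psql -U postgres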
107

Backup Database

Generate a plain SQL dump:

docker exec -t your-db-container pg_dumpall -c -U your-db-user > dump_$(date +%Y-%m-%d_%H_%M_%S).sql

To reduce the size, you can generate a compressed dump instead:

docker exec -t your-db-container pg_dumpall -c -U your-db-user | gzip > ./dump_$(date +"%Y-%m-%d_%H_%M_%S").sql.gz

Restore Database

cat your_dump.sql | docker exec -i your-db-container psql -U your-db-user -d your-db-name

To restore a compressed dump:

gunzip < your_dump.sql.gz | docker exec -i your-db-container psql -U your-db-user -d your-db-name

P.S.: This is a compilation of what worked for me and what I got from here and elsewhere. I am just beginning to contribute; any feedback will be appreciated.

  • using "cat your_dump.sql | .... " to restore a db I think has a really low performance, am I wrong? – EmiliOrtega Jan 09 '22 at 15:12
  • You forgot a sql extension before gz: docker exec -t your-db-container pg_dumpall -c -U your-db-user | gzip > ./dump_$(date +"%Y-%m-%d_%H_%M_%S").sql.gz – reza Jan 23 '23 at 12:04
  • Scenario: `user` table has 2 users. We make a backup. We register another user, so table has 3 entries. We restore backup with `cat your_dump.sql | docker exec -i your-db-container psql -U your-db-user -d your-db-name`. Result: user table still has 3 entries; backup did not restore database state at backup. P.s. I'm beginner too, maybe I hold incorrect assumptions how backups work. – yomajo Jan 26 '23 at 17:50
84

I think you can also use a postgres backup container, which backs up your databases on a given schedule.

  pgbackups:
    container_name: Backup
    image: prodrigestivill/postgres-backup-local
    restart: always
    volumes:
      - ./backup:/backups
    links:
      - db:db
    depends_on:
      - db
    environment:
      - POSTGRES_HOST=db
      - POSTGRES_DB=${DB_NAME} 
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
      - POSTGRES_EXTRA_OPTS=-Z9 --schema=public --blobs
      - SCHEDULE=@every 0h30m00s
      - BACKUP_KEEP_DAYS=7
      - BACKUP_KEEP_WEEKS=4
      - BACKUP_KEEP_MONTHS=6
      - HEALTHCHECK_PORT=81
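
A minimal restore sketch for the dumps this container produces, assuming it writes gzipped SQL files under the mounted ./backup directory (the exact path below is illustrative, not from the answer) and that the database service is named db as above:

# pipe one of the generated dumps back into the db service
gunzip -c ./backup/daily/${DB_NAME}/your_dump.sql.gz \
  | docker-compose exec -T db psql -U ${DB_USER} -d ${DB_NAME}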
Black
Tharindu Pradeep
44

The `cat db.dump | docker exec ...` approach didn't work for my dump (~2 GB). It took a few hours and ended with an out-of-memory error.

Instead, I `docker cp`'d the dump into the container and ran `pg_restore` on it from within.

Assuming the container ID is CONTAINER_ID and the db name is DB_NAME:

# copy dump into container
docker cp local/path/to/db.dump CONTAINER_ID:/db.dump

# shell into container
docker exec -it CONTAINER_ID bash

# restore it from within
pg_restore -U postgres -d DB_NAME --no-owner -1 /db.dump
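
The same idea also works without an interactive shell; a short sketch using the same CONTAINER_ID/DB_NAME placeholders:

# copy the dump in, then run pg_restore directly through docker exec
docker cp local/path/to/db.dump CONTAINER_ID:/db.dump
docker exec CONTAINER_ID pg_restore -U postgres -d DB_NAME --no-owner -1 /db.dump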
Alex Fedoseev
  • this approach, unlike the accepted one, helped me to get correct encodings when doing this in Windows. (Actually, I applied the dump by going `cat /home/db.sql | psql -U postgres -d DB_NAME -p DB_PORT`) – YakovL Sep 16 '21 at 07:28
  • This works much faster. One could make a backupscript based on this, build an image with the script included, and fire the script from the host through a cronjob. Mount a host-volume onto the container, and have a backupserver pull the daily sql-dumps. – mistige Oct 06 '21 at 10:43
  • I appreciate this because I had the exact same problem with `cat db.dum |...` – EmiliOrtega Jan 09 '22 at 15:13
13

Okay, I've figured this out. PostgreSQL does not detect changes to the folder /var/lib/postgresql once it has started, at least not the kind of changes I want it to detect.

The first solution is to start a container with bash instead of starting the postgres server directly, restore the data, and then start the server manually.

The second solution is to use a data container. I didn't get the point of it before; now I do. A data container lets you restore the data before starting the postgres container, so that when the postgres server starts, the data is already there.
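
For the second solution, a minimal sketch using the image and backup.tar from the question (the db_data name is just a placeholder):

# create a data container that owns the postgres volumes
docker run --name db_data -v /etc/postgresql -v /var/log/postgresql -v /var/lib/postgresql ubuntu true

# restore the archive into those volumes *before* postgres ever starts
docker run --rm --volumes-from db_data -v $(pwd):/backup ubuntu tar xvf /backup/backup.tar

# start postgres on top of the already-restored volumes
docker run -d --name "$DB_CONTAINER_NAME" --volumes-from db_data "$DB_IMAGE_NAME"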

Carl Levasseur
11

The command below can be used to take a dump from a Docker postgres container:

docker exec -t <postgres-container-name> pg_dump --no-owner -U <db-username> <db-name> > file-name-to-backup-to.sql
Shubham
  • Caution: I experienced a broken backup file when I used `pg_dump -F c` (custom format) with the `docker exec -t` option. I assume the terminal mode interferes with the piped binary output. Do not use `docker exec -t` (or `-i`). – not2savvy Jan 13 '23 at 13:26
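
Following the caution above, a sketch of the custom-format variant (same placeholders as the answer; note that -t is dropped so no pseudo-TTY can mangle the binary output):

docker exec <postgres-container-name> pg_dump -F c --no-owner -U <db-username> <db-name> > file-name-to-backup-to.dump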
8

The top answer didn't work for me. I kept getting this error:

psql: error: FATAL:  Peer authentication failed for user "postgres"

To get it to work, I had to specify a user for the docker exec command:

Backup

docker exec -t --user postgres your-db-container pg_dumpall -c -U postgres > dump_$(date +%d-%m-%Y_%H_%M_%S).sql

Restore

cat your_dump.sql | docker exec -i --user postgres your-db-container psql -U postgres
Marty
6

Another approach (based on docker-postgresql-workflow)

Locally running database (not in Docker, but the same approach would work) to export:

pg_dump -F c -h localhost -U postgres -f export.dmp mydb

Container database to import:

docker run -d -v /local/path/to/postgres:/var/lib/postgresql/data postgres  # starts the container; find its name (CONTAINERNAME below) via `docker ps`
docker run -it --link CONTAINERNAME:postgres --volume $PWD/:/tmp/ postgres bash -c 'exec pg_restore -h postgres -U postgres -d mydb -F c /tmp/export.dmp'
sjakubowski
4

I had this issue while trying to use a db_dump to restore a db. I normally use DBeaver to restore; however, I received a psql dump, so I had to figure out a method to restore it using the docker container.

The methodology recommended by Forth and edited by Soviut worked for me:

cat your_dump.sql | docker exec -i your-db-container psql -U postgres -d dbname

(Since this was a dump of a single database, not multiple databases, I included the name.)

However, to get this to work, I also had to go into the virtualenv that the docker container and project were in. This eluded me for a bit, as I was receiving the following Docker error:

read unix @->/var/run/docker.sock: read: connection reset by peer

This can be caused by the file /var/lib/docker/network/files/local-kv.db. I don't know how accurate this is, but I believe I was seeing it because I do not use Docker locally and therefore did not have this file, which Forth's answer was looking for.

I then navigated to the correct directory (with the project), activated the virtualenv, and then ran the accepted answer. Boom, worked like a top. Hope this helps someone else out there!

Eric Aya
activereality
3

dksnap (https://github.com/kelda/dksnap) automates the process of running pg_dumpall and loading the dump via /docker-entrypoint-initdb.d.

It shows you a list of running containers, and you pick the one you want to back up. The resulting artifact is a regular Docker image, so you can then docker run it, or share it by pushing it to a Docker registry.

(disclaimer: I'm a maintainer on the project)

Kevin Lin
  • great! looking forward for "A non-graphical CLI interface that's scriptable." so that I can use it from Robot Framework tests :) – Wlad Aug 18 '20 at 00:08
1

This is the command that worked for me.

cat your_dump.sql | sudo docker exec -i {docker-postgres-container} psql -U {user} -d {database_name}

for example

cat table_backup.sql | docker exec -i 03b366004090 psql -U postgres -d postgres

Reference: solution given by GMartinez-Sisti in this discussion: https://gist.github.com/gilyes/525cc0f471aafae18c3857c27519fc4b
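
If you don't know the container ID used above, a quick way to look it up (assuming the container was started from the official postgres image):

docker ps --filter ancestor=postgres --format '{{.ID}}  {{.Names}}'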

Jacob Nelson
  • I am getting back `invalid command \N` in the terminal when i run the command `cat your_dump.sql | sudo docker exec -i {docker-postgres-container} psql -U {user} -d {database_name} ` – Mohamed Ali Apr 15 '21 at 17:57
1

Using a File System Level Backup on Docker Volumes

Example Docker Compose

version: "3.9"

services:
  db:
    container_name: pg_container
    image: platerecognizer/parkpow-postgres
    # restart: always
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: admin
      POSTGRES_DB: admin

volumes:
  postgres_data:

Backup Postgresql Volume

docker run --rm \
   --user root \
   --volumes-from pg_container \
   -v /tmp/db-bkp:/backup \
   ubuntu tar cvf /backup/db.tar /var/lib/postgresql/data

Then copy /tmp/db-bkp to the second host.
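
For example (a rough sketch; user@second-host is a placeholder, not part of the original answer):

scp -r /tmp/db-bkp user@second-host:/tmp/db-bkp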

Restore Postgresql Volume

docker run --rm \
   --user root \
   --volumes-from pg_container \
   -v /tmp/db-bkp:/backup \
   ubuntu bash -c "cd /var && tar xvf /backup/db.tar --strip 1"
danleyb2
1

Solution for docker-compose users:

  1. First, run the docker-compose file with one of the following commands: $ docker-compose -f local.yml up OR $ docker-compose -f local.yml up -d
  2. To take a backup: $ docker-compose -f local.yml exec postgres backup
  3. To see the list of backups inside the container: $ docker-compose -f local.yml exec postgres backups
  4. Open another terminal and run the following command: $ docker ps
  5. Look for the CONTAINER ID of the postgres image and copy the ID. Let's assume the CONTAINER ID is ba78c0f9bcee.
  6. Now, to bring that backup into your local file system, run the following command: $ docker cp ba78c0f9bcee:/backups ./local_backupfolder

Hope this helps someone who was lost just like me.

N.B.: The full details of this solution can be found here.
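
A one-liner that combines steps 2 and 6 without copying the CONTAINER ID by hand (a sketch assuming the same local.yml and postgres service name; docker-compose ps -q prints the container ID):

docker-compose -f local.yml exec postgres backup && docker cp $(docker-compose -f local.yml ps -q postgres):/backups ./local_backupfolder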

Farid Chowdhury
1

Another way to do it is to run pg_restore from the host machine (provided, of course, that you have postgres set up on your host machine).

Assuming you have the port mapping "5436:5432" for the postgres service in your docker-compose file, this mapping lets you access the container's postgres (running on port 5432) via your host machine's port 5436:

pg_restore -h localhost -p 5436 -U <POSTGRES_USER> -d <POSTGRES_DB>  /Path/to/the/.psql/file/in/your/host_machine 

This way you do not have to dive into the container's terminal or copy the dump file to the container.
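
The dump in the other direction works the same way; a small sketch assuming the same 5436:5432 mapping and placeholders (the output path is illustrative):

pg_dump -h localhost -p 5436 -U <POSTGRES_USER> -F c -f /path/on/your/host_machine/backup.dump <POSTGRES_DB>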

sajid
0

I would like to add the official Docker documentation for backups and restores. This applies to all kinds of data within a volume, not just Postgres.

Backup a container

Create a new container named dbstore:

$ docker run -v /dbdata --name dbstore ubuntu /bin/bash

Then in the next command, we:

  • Launch a new container and mount the volume from the dbstore container

  • Mount a local host directory as /backup

  • Pass a command that tars the contents of the dbdata volume to a backup.tar file inside our /backup directory.

    $ docker run --rm --volumes-from dbstore -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata

When the command completes and the container stops, we are left with a backup of our dbdata volume.

Restore container from backup

With the backup just created, you can restore it to the same container, or another that you made elsewhere.

For example, create a new container named dbstore2:

$ docker run -v /dbdata --name dbstore2 ubuntu /bin/bash

Then un-tar the backup file in the new container's data volume:

$ docker run --rm --volumes-from dbstore2 -v $(pwd):/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"

You can use the techniques above to automate backup, migration and restore testing using your preferred tools.

Ryan McGrath