15

I have created a Docker volume for Postgres on my local machine:

docker volume create postgres-data

Then I used this volume to run a Docker container:

docker run -it -v postgres-data:/var/lib/postgresql/9.6/main postgres

After that I performed some database operations, which were stored automatically in postgres-data. Now I want to copy that volume from my local machine to a remote machine. How can I do that?

Note: the database is very large.
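
For reference, a quick way to check how much data is actually in the volume before deciding how to transfer it (a minimal sketch, mounting the volume read-only into a throwaway Alpine container):

docker run --rm -v postgres-data:/data:ro alpine du -sh /data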

murli2308

2 Answers

17

If the second machine has SSH enabled, you can use an Alpine container on the first machine to mount the volume, bundle its contents up with tar, and send them to the second machine over SSH.

That would look like this:

docker run --rm -v <SOURCE_DATA_VOLUME_NAME>:/from alpine ash -c \
    "cd /from ; tar -cf - . " | \
    ssh <TARGET_HOST> \
    'docker run --rm -i -v <TARGET_DATA_VOLUME_NAME>:/to alpine ash -c "cd /to ; tar -xpvf - "'

You will need to change the following placeholders (a filled-in example is shown after the list):

  • SOURCE_DATA_VOLUME_NAME
  • TARGET_HOST
  • TARGET_DATA_VOLUME_NAME
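
For illustration, here is the same command with the placeholders filled in, assuming the postgres-data volume from the question as the source, a hypothetical remote host user@remote.example.com, and a target volume also named postgres-data (Docker creates the named target volume automatically if it does not exist yet):

docker run --rm -v postgres-data:/from alpine ash -c \
    "cd /from ; tar -cf - . " | \
    ssh user@remote.example.com \
    'docker run --rm -i -v postgres-data:/to alpine ash -c "cd /to ; tar -xpvf - "'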

Or you could try using this helper script: https://github.com/gdiepen/docker-convenience-scripts

Hope this helps.

Simon I
  • I got one error in cli. In another machine docker is installed with sudo. So I added sudo in your command. `sudo: no tty present and no askpass program specified write /dev/stdout: broken pipe` – murli2308 Mar 23 '17 at 12:38
  • I resolved the sudo issue. Now I get the error as `tar: short read. write /dev/stdout: broken pipe`. I tried to run docker on another machine but it does not get the data – murli2308 Mar 23 '17 at 13:03
  • Which method are you using? The single line or the helper script? – Simon I Mar 23 '17 at 13:20
  • Single line. Issue is resolved now. I copied volume to another machine. Your answer is correct, but data is too large, can we make tar or gzip and transfer ? – murli2308 Mar 23 '17 at 16:55
  • The single line is a tar, if you want to try compressing it then you could try adding in the -z option at other ends, it would look like something along the lines of `docker run --rm -v :/from alpine ash -c "cd /from ; tar -cfz - . " | ssh 'docker run --rm -i -v :/to alpine ash -c "cd /to ; tar -xpvfz - "` – Simon I Mar 23 '17 at 17:10
  • Thanks it is working perfect. Is there any way to making tar and physically transferring data ? Can you please explain the command ? – murli2308 Mar 24 '17 at 05:18
  • Notice, command in the answer is missing an `'` in the end of the line otherwise it works fine. Ensure that your destination containers are stopped before copying the volume – Peter Theill Sep 27 '17 at 23:30
  • syntax for compression should be `tar -czf -` and `tar -xpvzf -` (`-f -` is the final argument) – Jacob Dorman Jan 29 '20 at 05:14
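
Taking the corrections from these comments into account, a compressed variant of the one-liner would look roughly like this (same placeholders as in the answer; the -z flag gzips the tar stream, which Alpine's busybox tar supports):

docker run --rm -v <SOURCE_DATA_VOLUME_NAME>:/from alpine ash -c \
    "cd /from ; tar -czf - . " | \
    ssh <TARGET_HOST> \
    'docker run --rm -i -v <TARGET_DATA_VOLUME_NAME>:/to alpine ash -c "cd /to ; tar -xpvzf - "'
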
4

I had the exact same problem, but in my case both volumes were in separate VPCs and I couldn't expose SSH to the outside world. I ended up creating dvsync, which uses ngrok to create a tunnel between them and then uses rsync over SSH to copy the data. In your case you could start the dvsync-server on your machine:

docker run --rm -e NGROK_AUTHTOKEN="$NGROK_AUTHTOKEN" \
  --mount source=postgres-data,target=/data,readonly \
  quay.io/suda/dvsync-server

and then start the dvsync-client on the target machine:

docker run -e DVSYNC_TOKEN="$DVSYNC_TOKEN" \
  --mount source=MY_TARGET_VOLUME,target=/data \
  quay.io/suda/dvsync-client

The NGROK_AUTHTOKEN can be found in the ngrok dashboard, and the DVSYNC_TOKEN is shown by the dvsync-server in its stdout.
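
If you would rather not keep a terminal attached to read the token, one option is to run the server detached and pull the token out of its logs (a sketch; the container name dvsync-server below is just one I picked):

# Run the server in the background under a chosen name instead of interactively.
docker run -d --name dvsync-server \
  -e NGROK_AUTHTOKEN="$NGROK_AUTHTOKEN" \
  --mount source=postgres-data,target=/data,readonly \
  quay.io/suda/dvsync-server

# The token is printed to stdout, so it can be read from the container logs.
docker logs dvsync-server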

Once the synchronization is done, the dvsync-client container will stop.

suda