I have a large file on my laptop (localhost). I would like to copy this file to a docker container which is located on a remote server. I know how to do it in two steps, i.e. I first copy the file to my remote server and then I copy the file from the remote server to the docker container. But, for obvious reasons, I want to avoid this.

A similar question, with a complicated answer, is covered here: Copy file from remote docker container

However, in that question the direction is reversed: the file is copied from the remote container to localhost.

Additional request: is it possible that this upload can be done piece-wise or that in case of a network failure I can resume the upload from where it stopped, instead of having to upload the entire file again? I ask because the file is fairly large, ~13GB.

waykiki
  • `Copy file from localhost to docker container on remote` `copied from the remote container to localhost.` So which way is it? Who initiates the connection? – KamilCuk Jan 19 '23 at 13:59
  • I'm sorry, what? It's localhost --> remote server --> docker container. – waykiki Jan 19 '23 at 14:02
  • The local system, the remote system, and the container each believe they're `localhost`. You might clarify this in your question. – David Maze Jan 19 '23 at 14:25
  • The container filesystem is intrinsically temporary, though, and I'd find it a little bit unusual to `docker cp` files into it, especially if it's data the container needs to run. Can you bind-mount parts of the remote host filesystem into the container, so that there's not the second copy step and the data won't be lost when the container exits? (A sketch of this follows the comments.) – David Maze Jan 19 '23 at 14:26
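
A bind mount, as the last comment suggests, might look like this minimal sketch (the image name and both paths are placeholders, not from the original thread):

docker run -v /srv/data:/data myimage

Files placed in /srv/data on the remote host then appear inside the container at /data, so there is no second copy step and the data survives the container exiting.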

3 Answers


From https://docs.docker.com/engine/reference/commandline/cp/#corner-cases and https://www.cyberciti.biz/faq/howto-use-tar-command-through-network-over-ssh-session/ you would just do:

tar Ccf $(dirname SRC_PATH) - $(basename SRC_PATH) | ssh you@host docker exec -i CONTAINER tar Cxf DEST_PATH -

or

tar Ccf $(dirname SRC_PATH) - $(basename SRC_PATH) | ssh you@host docker cp - CONTAINER:DEST_PATH
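
For example, with hypothetical values filled in (the user, host, container name, and paths are illustrative; with docker cp, the destination must be an existing directory in the container):

tar Ccf /home/me - bigfile.bin | ssh alice@example.com docker cp - mycontainer:/data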

Or, untested (no idea if this works):

DOCKER_HOST=ssh://you@host docker cp SRC_PATH CONTAINER:DEST_PATH
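
For example, with placeholder values (this relies on the docker CLI's ssh:// host support, which the last answer below also uses):

DOCKER_HOST=ssh://alice@example.com docker cp ./payload.tar.gz mycontainer:/data/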
KamilCuk

This will work if you are running a *nix server and a Docker container with an SSH server in it.

You can create a local tunnel on the remote server by following these steps:

# create a named pipe for the return path
mkfifo host_to_docker
# listen on your_public_port and relay the TCP stream to the container's SSH port
netcat -lkp your_public_port < host_to_docker | nc docker_ip_address 22 > host_to_docker &

The first command creates a named pipe, which you can verify with file host_to_docker.

The second one runs the greatest network utility of all time, netcat. It simply accepts a TCP connection and forwards it to another netcat instance, relaying the underlying SSH messages to the SSH server running in the container and writing its responses to the pipe we created.

The last step, run from your laptop, is:

scp -P your_public_port payload.tar.gz user@remote_host:/dest/folder
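
Since scp cannot resume an interrupted transfer, one option for the resume requirement in the question is rsync over the same tunnel (same placeholder port and paths as above; rsync must also be installed in the container):

rsync --partial --progress -e "ssh -p your_public_port" payload.tar.gz user@remote_host:/dest/folder

The --partial flag keeps partially transferred files, so a later run can reuse them via rsync's delta algorithm instead of resending everything.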
no more sigsegv

You can use the DOCKER_HOST environment variable and rsync to achieve your goal.

First, you set DOCKER_HOST, which makes your docker client (i.e., the docker CLI utility) connect to the remote server's docker daemon over SSH. This probably requires you to create an ssh-config entry for the destination server (a sample entry is sketched below).

export DOCKER_HOST="ssh://<your-host-name>"
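
A minimal ssh-config entry might look like the following sketch (the alias matches <your-host-name> above; the hostname, user, and key path are placeholders):

# ~/.ssh/config
Host <your-host-name>
    HostName server.example.com
    User alice
    IdentityFile ~/.ssh/id_ed25519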

Next, you can use docker exec in conjunction with rsync to copy your data into the target container. This requires you to obtain the container ID via, e.g., docker ps. Note that rsync must be installed in the container.

# copy the local path into the container, using docker exec as the remote shell
rsync -ar -e 'docker exec -i' <local-source-path> <container-id>:/<destination-in-the-container>

Since rsync is used, this also allows you to resume interrupted uploads later, provided the appropriate flags are used (see the sketch below).
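
For example, a sketch with resume-friendly flags (paths and container ID are placeholders, as above):

rsync -ar --partial --progress -e 'docker exec -i' <local-source-path> <container-id>:/<destination-in-the-container>

Here --partial keeps partially transferred files and --progress shows transfer status; after a network failure, a re-run can reuse the kept partial file via rsync's delta algorithm rather than starting over.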

AlphaBeta