273

My question is related to this question on copying files from containers to hosts; I have a Dockerfile that fetches dependencies, compiles a build artifact from source, and runs an executable. I also want to copy the build artifact out (in my case it's a .zip produced by `sbt dist` in `../target/`), but I think this question also applies to jars, binaries, etc.

docker cp works on containers, not images; do I need to start a container just to get a file out of it? In a script, I tried running /bin/bash in interactive mode in the background, copying the file out, and then killing the container, but this seems kludgey. Is there a better way?

On the other hand, I would like to avoid unpacking a .tar file after running docker save $IMAGENAME just to get one file out (but that seems like the simplest, if slowest, option right now).

I would use docker volumes, e.g.:

docker run -v hostdir:/out $IMAGENAME /bin/cp ../blah.zip /out

but I'm running boot2docker on OS X and I don't know how to write directly to my Mac host filesystem (read-write volumes are mounted inside my boot2docker VM), which means I can't easily share a script that extracts blah.zip from an image with others. Thoughts?

  • `save` is the only option if you do not have a runnable image, e.g. an image `FROM scratch` with `COPY --from ...` lines that contains no `bash` and has no `ENTRYPOINT`. The reason is that `docker container create` fails on those images. – Holger Böhnke Jun 16 '23 at 12:27

12 Answers

378

To copy a file from an image, create a temporary container, copy the file from it and then delete it:

id=$(docker create image-name)
docker cp $id:path - > local-tar-file
docker rm -v $id
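
For the question's scenario this can be wrapped in a small script; a rough sketch, where the image name and the artifact path inside the image are placeholders to adjust to your own build:

#!/usr/bin/env bash
set -euo pipefail
image="myimage:latest"                       # placeholder image name
artifact="/build/target/universal/blah.zip"  # placeholder path inside the image
id=$(docker create "$image")                 # creates, but never starts, a container
trap 'docker rm -v "$id" >/dev/null' EXIT    # always remove the temporary container
docker cp "$id:$artifact" .                  # copy the artifact into the current directory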
Igor Bukanov
  • What version of docker was the `create` command added/removed? (it's not present in 1.01) – ThorSummoner Aug 23 '15 at 06:20
  • @ThorSummoner `docker create` was introduced in Docker 1.3: https://blog.docker.com/2014/10/docker-1-3-signed-images-process-injection-security-options-mac-shared-directories/ – Igor Bukanov Sep 21 '15 at 09:59
  • This did not work for me. Specifically, the `docker create` command listed is insufficient for docker 16.04, requiring more arguments at minimum. – Chris Cleeland May 01 '17 at 14:47
  • @ChrisCleeland Does it work in your case when you add `--entrypoint /` arguments to the docker create command? – Igor Bukanov May 02 '17 at 15:50
  • This is good @IgorBukanov. I'm a semi-newbie at docker, and I was having a hard time figuring out how to view the content of an _image_ without starting a _container_. The answer - which you provided here - is, create a container, but don't start it. Thanks! – fool4jesus Jul 12 '18 at 21:06
  • This is the only method out of these that worked for me. The `--entrypoint "cp"` method resulted in a file-not-found error when it tried to copy from the mount; it seemed like some sort of race condition between the mount being available and the entrypoint executing, since it worked if I shelled in and then executed the copy. – GameSalutes Feb 19 '19 at 14:22
  • Definitely the right answer! Doesn't rely on anything inside the container... For Golang scratch images, this is the only way possible! – Marcello DeSales Aug 22 '19 at 07:28
  • Really useful answer, but I prefer using the `--name` param over an id env var to identify the created docker container – Andrés Alcarraz Nov 19 '19 at 00:06
  • Any reason to copy to stdout and then direct it to a local file? When I did this, it dumped a bunch of control characters before and after the file's content. Running it directly as `docker cp $id:path > local-tar-file` worked perfectly. – Yonatan May 11 '20 at 10:48
  • @Yonatan The main reason to save as a tar archive was to ensure that all information, including ownership names, is extracted, and to allow examining the archive locally before extracting it, for extra safety. Plus earlier docker versions IIRC did not support or had some issues with copying a directory from the container to a local directory. But if you know the structure of your container and your docker version is not an ancient one, surely you can just use `docker cp $id:path local_path` to extract into a local file or directory. – Igor Bukanov May 12 '20 at 16:25
  • To be extracted with `tar -xvf ` – isapir Dec 20 '22 at 19:15
114

Unfortunately there doesn't seem to be a way to copy files directly from Docker images. You need to create a container first and then copy the file from the container.

However, if your image contains a cat command (and it will do in many cases), you can do it with a single command:

docker run --rm --entrypoint cat yourimage  /path/to/file > path/to/destination

If your image doesn't contain cat, simply create a container and use the docker cp command as suggested in Igor's answer.

fons
  • Fantastic solution. Couldn't access my container since it crashed a second after launching, but needed to grab a file within it. This worked perfectly. – Mirodinho Aug 02 '17 at 12:42
  • FWIW, when doing this it recently started breaking for me. Basically it would lose some bytes at the end. I fixed it with a sleep like so: `docker run --rm --entrypoint bash yourimage -c 'cat /path/to/file; sleep 1' > path/to/destination`. This was docker 24.0.5. – kleptog Aug 01 '23 at 13:34
114
docker cp $(docker create --name tc registry.example.com/ansible-base:latest):/home/ansible/.ssh/id_rsa ./hacked_ssh_key && docker rm tc

for rhel/fedora:

podman cp $(podman create --name tc docker.io/alpine/curl):/usr/bin/curl ./curl && podman rm tc

I wanted to supply a one-line solution based on pure Docker functionality (no shell needed inside the container).

Edit: the container does not even have to be run in this solution.

Edit 2: thanks to @Jonathan Dumaine for `--rm`, so the container will be removed afterwards. I had never tried it, because it sounded illogical to copy something from somewhere that has already been removed by the previous command, but I tried it and it works.

Edit 3: due to the comments we found out that `--rm` is not working as expected; it does not remove the container because the container never runs, so I added a step to delete the created container afterwards (`--name tc` = temporary container).

Edit 4: this error appeared; it seems like a bug in Docker, because `t` is in `a-z` and this did not happen a few months before.

Error response from daemon: Invalid container name (t), only [a-zA-Z0-9][a-zA-Z0-9_.-] are allowed
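
A variant that sidesteps the name problem entirely is to let Docker pick the container name and capture the id instead (same example image as above):

id=$(docker create docker.io/alpine/curl) && docker cp "$id":/usr/bin/curl ./curl && docker rm "$id"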
Elytscha Smith
  • Underrated answer! It also works if you add `--rm` to the `docker create` call to leave no trace of the temporary container. – Jonathan Dumaine Sep 17 '20 at 21:37
  • Thanks, I edited this. It just sounded strange to `--rm` it first and then copy the file from it; I thought it wouldn't work, so I never tried, but now I tried and it works haha – Elytscha Smith Sep 18 '20 at 10:00
  • I don't think `--rm` removes anything in this case since the container never runs – Roman Usherenko Nov 11 '20 at 10:47
  • Isn't there a chance that this will remove the container before copying has completed? – Cameron Hudson Jan 11 '21 at 01:17
  • No, because as Roman said, the `--rm` does not remove anything because the container never runs – Elytscha Smith Jan 11 '21 at 09:45
  • This has to be followed up with a `docker rm` call to remove the container. – Mitar Dec 06 '21 at 12:23
  • technically bash is still needed as `$(...)` is used – m13r Jan 18 '22 at 09:58
  • @m13r I think you got that wrong. Yes, bash or some shell is generally needed to execute commands, but IN the container no shell is needed with this approach. Other approaches need bash/sh/zsh built into the image to work, which is not the case for all images, e.g. images built from scratch containing only a statically compiled Golang binary. – Elytscha Smith Jan 18 '22 at 18:51
  • I think `--rm` is not doing what you want. It does not remove the container, because `--rm` only removes the container on *exit* and the container cannot exit if it doesn't start. (And if it did you couldn't copy the files from it.) So while it doesn't hurt anything here (it doesn't do anything), it's misleading. It does **not** result in the container being removed. Instead, as @Mitar said, you need to manually `docker rm` the container. – Elliott Slaughter Apr 20 '22 at 04:46
  • @ElliottSlaughter Feel free to edit my answer; I added the `--rm` because of comment #1 – Elytscha Smith May 16 '22 at 11:50
  • @ElytschaSmith I could not submit the edit because the edit queue is full. I think you need to either approve or reject the other edits first. (Or we need an mod to do so. I do not have the necessary privilege level.) – Elliott Slaughter May 17 '22 at 15:45
  • I edited in the `rm` functionality. – Elytscha Smith Jun 13 '22 at 14:27
  • `Error response from daemon: Invalid container name (t), only [a-zA-Z0-9][a-zA-Z0-9_.-] are allowed must specify at least one container source` – Nairum Jul 29 '22 at 13:27
51

A much faster option is to copy the file from a running container to a mounted volume:

docker run -v $PWD:/opt/mount --rm --entrypoint cp image:version /data/libraries.tgz /opt/mount/libraries.tgz

real 0m0.446s

vs.

docker run --rm --entrypoint cat image:version /data/libraries.tgz > libraries.tgz

real 0m9.014s
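
To reproduce the comparison on your own image, a minimal sketch (image name and file path are placeholders; the absolute numbers will vary with the storage driver and platform):

time docker run -v "$PWD":/opt/mount --rm --entrypoint cp image:version /data/libraries.tgz /opt/mount/libraries.tgz
time docker run --rm --entrypoint cat image:version /data/libraries.tgz > libraries.tgz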

Mikko
  • This probably has more to do with the underlying file system performing a lazy/shallow copy of the file (think copy-on-write) in the first example, vs actually copying the bytes of the file in the second example. A useful test would be to see if `cat a >b` vs `cp a b` have similar timings as shown here. Also, if the source path and destination path reside on different file systems, then both examples will lead to a full byte-for-byte copy. – KevinOrr Apr 14 '20 at 06:35
  • The only solution that worked for me to copy a file from an image. Other solutions gave a stupid docker error. My docker version: `Docker version 20.10.6, build 370c289` – Saurav Kumar May 21 '21 at 17:53
21

An earlier answer already showed how to use cat. You could also use tar in a similar fashion:

docker run yourimage tar -c -C /my/directory subfolder | tar x
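
A slightly more explicit variant of the same idea, spelling out the archive streams and unpacking into a host directory (the image name and paths are placeholders):

mkdir -p ./extracted
docker run --rm yourimage tar -cf - -C /my/directory subfolder | tar -xvf - -C ./extracted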
wedesoft
  • This answer copies directories rather than the single files the original question asks about. However, +1 because it also works with files and comes with an extra feature: permission and owner preservation. Great! – caligari Nov 30 '17 at 09:06
  • Actually, I use `docker run --rm --entrypoint tar _image_ cC _img_directory_ . | tar xvC _host_directory_` – caligari Nov 30 '17 at 11:04
14

Another (short) answer to this problem:

docker run -v $PWD:/opt/mount --rm -ti image:version bash -c "cp /source/file /opt/mount/"

Update - as noted by @Elytscha Smith, this only works if your image has bash built in.
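
If the image only ships a minimal shell such as sh (busybox/alpine-style images), the same idea should still work; a sketch with placeholder image name and paths:

docker run -v $PWD:/opt/mount --rm image:version sh -c "cp /source/file /opt/mount/"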

freedev
8

First, pull the docker image using `docker pull`:

docker pull <IMG>:<TAG>

Then, create a container using the `docker create` command and store the container id in a variable:

img_id=$(docker create <IMG>:<TAG>)

Now, run the `docker cp` command to copy folders and files from the docker container to the host:

docker cp $img_id:/path/in/container /path/in/host

Once the files/folders are copied out, delete the container using `docker rm`:

docker rm -v $img_id
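
Put together, the whole sequence looks something like this (the image, tag and paths are placeholders):

docker pull alpine:3.19
img_id=$(docker create alpine:3.19)
docker cp "$img_id":/etc/os-release ./os-release
docker rm -v "$img_id"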
5

Not a direct answer to the question details, but in general, once you have pulled an image, the image is stored on your system, and so are all of its files. Depending on the storage driver of the local Docker installation, these files can usually be found in /var/lib/docker/overlay2 (requires root access). overlay2 should be the most common storage driver nowadays, but the path may differ.

The layers associated with an image can be found using `$ docker image inspect IMAGE_NAME:TAG`; look for the GraphDriver attribute.
At least in my local environment, the following also works to quickly see all layers associated with an image:
docker image inspect IMAGE_NAME:TAG | jq ".[0].GraphDriver.Data"

In one of these diff directories, the wanted file can be found.
So in theory, there's no need to create a temporary container. Of course, this solution is pretty inconvenient.
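
For example, a rough sketch of how one might search an image's layer directories for a particular file, assuming the overlay2 driver and root access (the image name and file name are placeholders):

docker image inspect myimage:latest \
  | jq -r '.[0].GraphDriver.Data[]' \
  | tr ':' '\n' \
  | while read -r dir; do sudo find "$dir" -name 'blah.zip' 2>/dev/null; done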

nichoio
  • Nice solution. It might be inconvenient, but there are certainly use cases where you don't want to start a container. Note that this only works when you are using `overlay2` as storage driver. (But the same technique with other paths can be used for other storage drivers.) – Garo Apr 13 '21 at 11:58
  • I actually learnt a little bit more about containers in the meantime and edited the answer accordingly. – nichoio Jun 07 '21 at 11:57
  • This was the solution I needed, because the image I'm working with has no executables in it so it can't be run because there's no command that can be run in it. It's used purely as a source for copying files, but that involves building another image to get at them. This solution was by far the quickest and easiest way to get them out for debugging. – Neil Mayhew May 20 '23 at 02:10
4

Update - here's a better version without the tar file:

$id = & docker create image-name
docker cp ${id}:path .
docker rm -v $id

Old answer, a PowerShell variant of Igor Bukanov's answer:

$id = & docker create image-name
docker cp ${id}:path - > local-file.tar
docker rm -v $id
Tereza Tomcova
3

You essentially had the best solution already. Have the container copy out the files for you, and then remove itself when it's complete.

This will copy the files from /inside/container/ to your machine at /path/to/hostdir/.

docker run --rm -v /path/to/hostdir:/mnt/out "$IMAGENAME" /bin/cp -r /inside/container/ /mnt/out/
Cameron Hudson
  • I love SO. People give -1 and do not explain WHY. I would like to know why this is the case. – Benjamin Marwell May 04 '21 at 06:02
  • The only annoying thing with this solution is that the files on the host end up being owned by root instead of the user I am using to run the docker command. Is there any way to fix that? – Joseph Garvin Dec 21 '21 at 21:29
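
If you want the copied files to be owned by your user rather than root, one possible variant is to run the copy as your own uid/gid; a sketch, assuming the files inside the image are world-readable:

docker run --rm --user "$(id -u):$(id -g)" -v /path/to/hostdir:/mnt/out "$IMAGENAME" /bin/cp -r /inside/container/ /mnt/out/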
0

I am using boot2docker on macOS. I can assure you that scripts based on `docker cp` are portable: every command is relayed to the Docker daemon inside boot2docker, but the binary stream is relayed back to the docker command-line client running on your Mac, so write operations from the docker client are executed inside the server and written back to the executing client instance.

I am sharing a backup script for docker volumes with every docker container I provide, and my backup scripts are tested both on Linux and on macOS with boot2docker. The backups can easily be exchanged between platforms. Basically I am executing the following command inside my script:

docker run --name=bckp_for_volume --rm --volumes-from jenkins_jenkins_1 -v /Users/github/jenkins/backups:/backup busybox tar cf /backup/JenkinsBackup-2015-07-09-14-26-15.tar /jenkins

This runs a new busybox container and mounts the volume of my jenkins container, which is named jenkins_jenkins_1. The whole volume is written to the file backups/JenkinsBackup-2015-07-09-14-26-15.tar.

I have already moved archives between a Linux container and my Mac container without any adjustments to the backup or restore script. If this is what you want, you can find the whole script and a tutorial here: blacklabelops/jenkins
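
The restore direction is symmetric; a sketch under the same assumptions (container name, host path and archive name as above):

docker run --rm --volumes-from jenkins_jenkins_1 -v /Users/github/jenkins/backups:/backup busybox tar xf /backup/JenkinsBackup-2015-07-09-14-26-15.tar -C /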

blacklabelops
0

You could bind a local path on the host to a path in the container, and then cp the desired file(s) to that path at the end of your script.

$ docker run -d \
  -it \
  --name devtest \
  --mount type=bind,source="$(pwd)"/target,target=/app \
  nginx:latest

Then there is no need to copy afterwards.
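
For the question's use case that would mean running the build with the target directory bind-mounted, so the .zip lands on the host directly; a sketch that assumes sbt is available in the image and the project lives at /app (both are assumptions):

$ docker run --rm \
  --mount type=bind,source="$(pwd)"/target,target=/app/target \
  "$IMAGENAME" sbt dist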

ryanjdillon