1056

I've noticed with Docker that I often need to understand what's happening inside a container, or what files exist in there. One example is downloading images from the Docker index - you don't have a clue what the image contains, so it's impossible to start the application.

What would be ideal is to be able to ssh into them, or an equivalent. Is there a tool to do this, or is my conceptualisation of Docker wrong in thinking I should be able to do this?

dreftymac
  • 31,404
  • 26
  • 119
  • 182
user2668128
  • 39,482
  • 8
  • 27
  • 34
  • 22
    In the latest versions of Docker, something like this is possible: `docker exec <container> bash`. So, you just open a shell inside the container. – dashohoxha Feb 11 '16 at 07:19
  • 17
    running bash on a container only works if bash is installed inside the container – Christopher Thomas Jun 02 '18 at 10:30
  • 17
    Similarly, you can do: `docker exec <container> ls <dir>` and `docker exec <container> cat <file>`. For bash however, add the `-it` options. – Noam Manos Jul 10 '18 at 10:55
  • Similar question: https://stackoverflow.com/questions/44769315/how-to-see-docker-image-contents – Vadzim Nov 05 '18 at 19:18
  • 6
    @ChristopherThomas, exactly. Because of that I've found that the only robust way to do this is with `docker image save image_name > image.tar` as indicated in the response from @Gaurav24. – Jaime Hablutzel Nov 24 '18 at 22:32
  • If Docker were ever going to provide a UI, then having a file browser for running containers would be a good feature to have in there. – William Entriken Jul 26 '19 at 21:44
  • If you want to output file system contents when building your docker file, [see this post](https://stackoverflow.com/a/34215313/97803) – David Yates Sep 11 '19 at 19:56
  • The most glaring omission from the Docker offerings. The suggestions are helpful, but not if the image doesn't build! Sometimes we just need to see the directory structure. – Kraken Jul 21 '23 at 12:54

32 Answers

1088

Here are a couple of different methods...

A) Use docker exec (easiest)

Docker version 1.3 or newer supports the exec command, which behaves similarly to nsenter. This command can run a new process in an already running container (the container must have its PID 1 process still running). You can run /bin/bash to explore the container state:

docker exec -t -i mycontainer /bin/bash

See the Docker command line documentation.

B) Use Snapshotting

You can evaluate the container filesystem this way:

# find ID of your running container:
docker ps

# create image (snapshot) from container filesystem
docker commit 12345678904b5 mysnapshot

# explore this filesystem using bash (for example)
docker run -t -i mysnapshot /bin/bash

This way, you can evaluate the filesystem of the running container at a precise moment in time. The container is still running, and no future changes are included.

You can later delete the snapshot (the filesystem of the running container is not affected!):

docker rmi mysnapshot

C) Use ssh

If you need continuous access, you can install sshd in your container and run the sshd daemon:

docker run -d -p 22 mysnapshot /usr/sbin/sshd -D
 
# you need to find out which port to connect to:
docker ps

This way, you can run your app using ssh (connect and execute what you want).
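Once the container is running, docker port shows which host port was mapped to the container's port 22; a quick sketch (the printed port 49154 is only an example, and it assumes you have already set up a user or key for ssh inside the container):

docker port <container> 22    # prints something like 0.0.0.0:49154
ssh root@localhost -p 49154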

D) Use nsenter

Use nsenter, see Why you don't need to run SSHd in your Docker containers

The short version is: with nsenter, you can get a shell into an existing container, even if that container doesn't run SSH or any kind of special-purpose daemon.
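A minimal sketch of that approach (it requires root on the host, mycontainer is just an example name, and the shell you launch must exist inside the container's filesystem):

# find the PID of the container's main process
PID=$(docker inspect --format '{{.State.Pid}}' mycontainer)

# enter the container's namespaces and start a shell there
sudo nsenter --target "$PID" --mount --uts --ipc --net --pid -- /bin/sh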

Benjamin Loison
  • 3,782
  • 4
  • 16
  • 33
Jiri
  • 16,425
  • 6
  • 52
  • 68
  • 12
  • but note, if you need access to files, use the `docker cp` command. Usage: `docker cp CONTAINER:PATH HOSTPATH` copies files/folders from the container's filesystem to the host path. Paths are relative to the root of the filesystem. `#> docker cp 7bb0e258aefe:/etc/debian_version .` `#> docker cp blue_frog:/etc/hosts .` – Amos Folarin Apr 24 '14 at 11:37
  • Method 1 and method 4 are giving me different results?! Not sure why :/ – Sven May 05 '16 at 10:30
  • @Sven different how? Maybe the environment is passed differently – Janus Troelsen Dec 30 '16 at 13:11
  • @GünterZöchbauer how will that work if there is no shell executable in the container? – Janus Troelsen Dec 30 '16 at 13:15
  • @JanusTroelsen I don't think so. Perhaps this is of any help to you http://stackoverflow.com/questions/27873312/docker-exec-versus-nsenter-any-gotchas I don't know `nsenter` but I guess you'd need a shell as well, how would you otherwise execute commands like `ls` or `cd` to investigate the container. – Günter Zöchbauer Dec 30 '16 at 13:21
  • 4
    Option 4 is so important that it should be moved to the top and renamed `Option 1`. – automorphic Feb 23 '17 at 02:26
  • 5
    @JanusTroelsen If there is no shell you can install it - for instance in dockerfile for alpine linux (which indeed doesn't have shell) by: `RUN apk update && apk add bash` (size: ~4MB) – Kamil Kiełczewski May 11 '17 at 18:55
  • 6
    In my own experience, the limitation with docker exec is that the command has to be run against a running container or set as a kind of entrypoint. Hence a stopped container is out of scope for this method. – Webwoman Sep 13 '18 at 18:16
  • I tried that and getting "Error response from daemon: Cannot start container 65a545b492c33150be5b04acf3dd181b5857c9ae1e53805a677ef1b7640843c7: iptables failed: iptables -t nat -A DOCKER -p tcp -d 0/0 --dport 10091 -j DNAT --to-destination 172.17.0.33:10091 ! -i docker0: iptables: No chain/target/match by that name. (exit status 1) " so my image is broken now :-( – Jose Manuel Gomez Alvarez Sep 23 '19 at 16:57
  • On Windows' Git Bash, method 4 with a slight modification: `winpty docker exec -t -i cvat bash` – Eric Grinstein Nov 04 '19 at 14:38
  • 9
    To use Window's linux shell use `docker exec -t -i mycontainer /bin/sh` – Jason Masters Nov 11 '19 at 00:49
  • I understand this answer works but how to exit from the bash shell? If I just do Ctrl-D to stop the shell will that cause any problems to the running container? As I have production containers and I don't want to touch any other running processes on it. I just want to inspect some files. – fhcat May 13 '20 at 18:35
  • @fhcat Yes, you can do that with docker exec, just be careful. The container will exit only when its PID 1 process exits. In case of doubt, you can use snapshotting - it is totally safe because you are exploring a copy of the real container. – Jiri May 14 '20 at 08:23
  • what if the container doesn't have bash, or apk, or anything like that installed? – Thayne Aug 28 '20 at 20:24
  • I have a nodejs docker container. So bash is not installed and I get the following error ``` OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"bash\": executable file not found in $PATH": unknown ``` – ankitjena Aug 31 '20 at 11:42
  • Link https://docs.docker.com/v1.3/reference/commandline/cli/#exec is broken – Michael Freidgeim Oct 25 '20 at 09:47
  • 1
    The container I want to explore has no bash, no SSH and not even a /bin/sh. It just contains a single executable. And I don't want to snapshot it, because the reason why I want to manipulate its filesystem is that I want to shred a confidential file I accidentally stored in the container filesystem. Snapshotting it would just create another copy of it. – SOFe Mar 10 '21 at 00:42
428

UPDATE: EXPLORING!

This command should let you explore a running docker container:

docker exec -it name-of-container bash

The equivalent for this in docker-compose would be:

docker-compose exec web bash

(web is the name-of-service in this case and it has tty by default.)

Once you are inside do:

ls -lsa

or any other bash command like:

cd ..

This command should let you explore a docker image:

docker run --rm -it --entrypoint=/bin/bash name-of-image

once inside do:

ls -lsa

or any other bash command like:

cd ..

The -it stands for interactive... and tty.


This command should let you inspect a running docker container or image:

docker inspect name-of-container-or-image

You might want to do this to find out if there is any bash or sh in there. Look for entrypoint or cmd in the JSON that is returned.

NOTE: This answer relies on common tools being present, but if there is no bash shell or common tools like ls present, you could first add them in a layer if you have access to the Dockerfile; example for alpine:

RUN apk add --no-cache bash

Otherwise, if you don't have access to the Dockerfile, then just copy the files out of a newly created container and look through them by doing:

docker create <image>  # returns a container ID; the container is never started
docker cp <container ID>:<source_path> <destination_path>
docker rm <container ID>
cd <destination_path> && ls -lsah

see docker exec documentation

see docker-compose exec documentation

see docker inspect documentation

see docker create documentation

Khalil Gharbaoui
  • 6,557
  • 2
  • 19
  • 26
  • 2
    This is extremely useful, thanks! I need to drag and drop a file contained inside a docker image file structure into an application, but that won't be possible unless it's opened in a GUI format. Any idea how I could work around that? – Arkya Nov 30 '16 at 09:26
  • @ArkyaChatterjee You could just copy from source to destination. With `docker cp` Maybe with a 1 step in between. Read and play around with [this](https://docs.docker.com/engine/reference/commandline/cp/) a little and you will get it! – Khalil Gharbaoui Dec 04 '16 at 05:09
  • 4
    It should be fairly obvious that this will only work on a container that has bash installed. – Software Engineer May 13 '17 at 00:00
  • @engineer-dollery obviously yes bash, zsh, ksh, tcsh, fish or whatever command shell you have available or prefer would work. – Khalil Gharbaoui Mar 17 '18 at 20:17
  • 4
    For anyone looking at how to do this on a Windows Container/Powershell, the command is `docker exec -ti <container> powershell` ([source](https://forums.docker.com/t/using-powershell-on-host-to-connect-to-container/25856/2)) – ssell Apr 26 '18 at 20:36
  • 1
    @ssell my container/image did not have powershell for some reason so `docker exec -ti <container> cmd` worked. And for other newbies like myself make sure to use the container instance name from `docker ps` (something like 070494393ca5) rather than the readable name you assigned it. – Simon_Weaver Aug 25 '18 at 23:58
  • 1
    regarding powershell in images https://github.com/aspnet/aspnet-docker/issues/362 - and if you only need curl on windows images : https://blogs.technet.microsoft.com/virtualization/2017/12/19/tar-and-curl-come-to-windows/ – Simon_Weaver Aug 26 '18 at 04:55
  • 1
    Most helpful answer -- my image was running the app directly, couldn't get it to run bash. Needed to know the --entrypoint=/bin/bash trick! – BobHy May 25 '19 at 04:04
  • 1
    yeah this was the most helpful anwer. – Antonow297296 Sep 04 '19 at 18:18
  • 1
    `docker cp` was the only solution in my case: the image was so minimal that there was no shell installed at all. `docker cp 50b48fab2336:/ /tmp/50b48fab2336/ && cd /tmp/50b48fab2336/ && ls -la` worked perfectly. – joshfindit Jun 14 '23 at 12:11
235

In case your container is stopped or doesn't have a shell (e.g. hello-world mentioned in the installation guide, or non-alpine traefik), this is probably the only possible method of exploring the filesystem.

You may archive your container's filesystem into a tar file:

docker export adoring_kowalevski > contents.tar

Or list the files:

docker export adoring_kowalevski | tar t

Do note that, depending on the image, it might take some time and disk space.
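If you want to browse the files rather than just list them, you can unpack the export into a scratch directory (same container name as above; this is just a sketch):

mkdir contents
docker export adoring_kowalevski | tar -x -C contents
ls -la contents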

x-yuri
  • 16,722
  • 15
  • 114
  • 161
Ilya Murav'jov
  • 2,467
  • 1
  • 10
  • 3
  • 13
    I simply wanted to list the contents of a container that doesn't have standard UNIX tools installed. A variation of the `export` example above hit the spot: `docker export adoring_kowalevski | tar tf -` – berto Jan 09 '16 at 16:01
  • 3
    A warning to the unwary: this might export a _lot_ of data (> GB) and take a long time. – Vince Bowdren Mar 24 '17 at 17:48
  • 6
    @berto not that it's a massive thing, but you shouldn't need the `f -` at the end of your command, tar reads from standard input by default. Simply `docker export adoring_kowalevski | tar t` works. – Shaun Bouckaert Jul 20 '17 at 13:47
  • 4
    @ShaunBouckaert the default for `tar f` is dependent on one's configuration. One part is the `TAPE` environment variable. Others are controlled as part of the build. The net effect is that one should never assume it reads _stdin_ or writes _stdout_ but always state it explicitly. – roaima Jan 15 '19 at 13:13
  • Any alternative for images, not containers? – Alexander Shcheblikin May 11 '20 at 20:03
60

The most upvoted answer works for me when the container is actually started, but when it isn't possible to run the container and you, for example, want to copy files out of it, this has saved me before:

docker cp <container-name>:<path/inside/container> <path/on/host/>

Thanks to docker cp you can copy directly from the container as if it were any other part of your filesystem. For example, recovering all files inside a container:

mkdir /tmp/container_temp
docker cp example_container:/ /tmp/container_temp/

Note that you don't need to specify that you want to copy recursively.

Julius Printz
  • 701
  • 5
  • 2
  • 9
    why does this not have more +1's ! definitely the best way – Nicholas DiPiazza Mar 01 '18 at 14:50
  • This is even simpler than exporting via tar. I had to use -L to get to the files via symlinks. No need to run the container! – MKaama Jul 26 '19 at 18:01
  • This should be the accepted answer! Especially if you want to explore the file system when your docker container can not run for some reason ("debugging"). This way is simple and easy. – Armin Braunstein May 26 '22 at 09:07
58

Before Container Creation :

If you want to explore the structure of the image that is mounted inside the container, you can do

sudo docker image save image_name > image.tar
tar -xvf image.tar

This gives you visibility into all the layers of the image and its configuration, which is present in JSON files.
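As a rough sketch of digging into those layers (the exact layout inside image.tar varies between Docker versions, jq is assumed to be installed, and <layer-path> is a placeholder for one of the paths printed from the manifest):

# see which layer archives make up the image
tar -xf image.tar manifest.json
jq -r '.[0].Layers[]' manifest.json

# extract one layer archive and list the files it contains
tar -xf image.tar <layer-path>
tar -tf <layer-path>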

After container creation :

For this there are already a lot of answers above. My preferred way to do this would be:

docker exec -t -i container /bin/bash
Gaurav Ingalkar
  • 1,217
  • 11
  • 20
  • 1
    See too https://sreeninet.wordpress.com/2016/06/11/looking-inside-container-images/. – Jaime Hablutzel Nov 24 '18 at 22:29
  • 4
    It should be mentioned here that running bash inside container only works if you're doing it on machine with same architecture as image. If you're on PC trying to peek into raspberry pi's image filesystem, bash trick won't work. – Maxim Kulkin Jun 27 '19 at 17:29
  • @MaximKulkin Really? If the container is Linux it doesn't matter what the host is, if bash is available. Perhaps you are thinking of Windows containers? – Thorbjørn Ravn Andersen Jul 30 '19 at 12:36
  • In some rare cases I could only enter the `sh` prompt when `bash` was not loaded in the container. – questionto42 Aug 31 '21 at 10:38
  • @ThorbjørnRavnAndersen Actually if the processor architecture is different e.g. `amd64` vs `arm64` you cannot run the container *natively*, even if the OS is the same e.g. `linux`. You can run it under an emulation layer e.g. `QEMU`. See my other comment. – Holger Böhnke Aug 10 '23 at 20:00
  • @MaximKulkin You are right, that you cannot run the container *natively*. You can however run it in a `QEMU` emulation. See https://github.com/dbhi/qus for more information. In a nutshell, run `docker run --rm --privileged aptman/qus -s -- -p` to setup the emulation layer, then run the docker container like you would run a native one. Beware this is kind of slow. Very slow at times. – Holger Böhnke Aug 10 '23 at 20:08
  • Here's https://stackoverflow.com/a/54214642/1614903 an answer what the above emulation layer setup command does. – Holger Böhnke Aug 10 '23 at 20:22
  • @HolgerBöhnke yes, the hardware needs to be compatible. __Given that__, the actual distribution used on the host is not very important. – Thorbjørn Ravn Andersen Aug 11 '23 at 11:21
48

You can use dive to view the image contents interactively with a TUI:

https://github.com/wagoodman/dive

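Basic usage is simply pointing dive at an image tag; a quick sketch (nginx:latest is only an example):

dive nginx:latest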

Andy Wong
  • 3,676
  • 1
  • 21
  • 18
46

The file system of the container is in the data folder of Docker, normally in /var/lib/docker. In order to start and inspect a running container's file system, do the following:

# run detached so that docker run prints the container ID and the container keeps running
hash=$(docker run -d busybox top)
cd /var/lib/docker/aufs/mnt/$hash

And now the current working directory is the root of the container.

Rovanion
  • 4,382
  • 3
  • 29
  • 49
20

Try using

docker exec -it <container-name> /bin/bash

There is a possibility that bash is not installed in the container. In that case, you can use

docker exec -it <container-name> sh
Gaurav Sharma
  • 1,983
  • 18
  • 18
20

Only for LINUX

The simplest way that I use is via the proc directory; the container must be running in order to inspect the Docker container's files.

  1. Find out the process id (PID) of the container and store it into some variable

    PID=$(docker inspect -f '{{.State.Pid}}' your-container-name-here)

  2. Make sure the container process is running, and use the variable name to get into the container folder

    cd /proc/$PID/root

If you want to get into the directory without finding out the PID first, just use this long command:

cd /proc/$(docker inspect -f '{{.State.Pid}}' your-container-name-here)/root

Tips:

After you get inside the container this way, everything you do will affect the container's actual processes, such as stopping a service or changing a port number.

Hope it helps

Note:

This method only works if the container is still running; the directory no longer exists once the container has been stopped or removed.
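One handy use of this path is pulling a single file out of a running container without docker cp; a small sketch (the file path is only an example):

PID=$(docker inspect -f '{{.State.Pid}}' your-container-name-here)
sudo cp /proc/$PID/root/etc/hostname ./hostname-from-container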

Aditya Kresna Permana
  • 11,869
  • 8
  • 42
  • 48
  • 1
    This should be higher up. My Docker host's file system was mounted as read-only, so I had no way of using `docker cp`. Instead, needed a direct path that I could pull from the host via `scp` and your solution provided me with one. Thanks! – balu Apr 15 '21 at 11:35
18

On Ubuntu 14.04 running Docker 1.3.1, I found the container root filesystem on the host machine in the following directory:

/var/lib/docker/devicemapper/mnt/<container id>/rootfs/

Full Docker version information:

Client version: 1.3.1
Client API version: 1.15
Go version (client): go1.3.3
Git commit (client): 4e9bbfa
OS/Arch (client): linux/amd64
Server version: 1.3.1
Server API version: 1.15
Go version (server): go1.3.3
Git commit (server): 4e9bbfa
piercebot
  • 1,767
  • 18
  • 16
  • Works like a charm: `name=<name>; dockerId=$(docker inspect -f {{.Id}} $name); /var/lib/docker/devicemapper/mnt/$dockerId/rootfs/` – Florent Jan 05 '16 at 00:37
  • 3
    With Ubuntu 16.10 and docker 1.12.1 this is unfortunately not the case anymore (no `devicemapper` directory). The files exist under `/var/lib/docker/overlay/<id>/...`. I am not sure how portable/safe it is to access files there – WoJ Dec 01 '16 at 12:30
  • 1
    Starting from 1.10, Docker introduced a new content addressable storage model, which doesn't use randomly generated UUID, as was previously both for layer and container identifiers. In the new model this is replaced by a secure content hash for layer id. So this method will not work anymore. – Artem Dolobanko Mar 10 '17 at 12:45
  • This is not portable and depends heavily on the choice of the [storage driver](https://docs.docker.com/storage/storagedriver/select-storage-driver/). Not sure if the solution will work with `direct-lvm` for example. – rustyx Oct 23 '18 at 10:18
18

In my case no shell was available in the container except sh, so this worked like a charm:

docker exec -it <container-name> sh
shx
  • 1,068
  • 1
  • 14
  • 30
17

The most-voted answer is good, except if your container isn't an actual Linux system.

Many containers (especially the Go-based ones) don't have any standard binary (no /bin/bash or /bin/sh). In that case, you will need to access the actual container's files directly:

Works like a charm:

name=<name>
dockerId=$(docker inspect -f {{.Id}} $name)
mountId=$(cat /var/lib/docker/image/aufs/layerdb/mounts/$dockerId/mount-id)
cd /var/lib/docker/aufs/mnt/$mountId

Note: You need to run it as root.

Florent
  • 1,311
  • 1
  • 14
  • 15
  • This no longer works. The devicemapper folder isn't there. – 0xcaff Nov 06 '17 at 05:48
  • It would be nice if people with outdated answers would clean them up – Matthew Purdon Aug 15 '18 at 14:04
  • 3
    I updated the command to match the new docker storage structure. – Florent Aug 24 '18 at 00:10
  • 2
    On my system running docker 19.03 the mountId is now found in /var/lib/docker/image/overlay2/$dockerId/mount-id and the mounted filesystem resides in /var/lib/docker/overlay2/$mountId/merged/ Or you simply use the good answer from @Raphael above, which should keep working even when the way the overlay fs is used is changed again. – Alexander Stumpf Feb 26 '21 at 22:34
16

I use another dirty trick that is aufs/devicemapper agnostic.

I look at the command that the container is running, e.g. with docker ps, and if it's an Apache or Java process I just do the following:

sudo -s
cd /proc/$(pgrep java)/root/

and voilà, you're inside the container.

Basically, as root you can cd into the /proc/<PID>/root/ folder as long as that process is run by the container. Beware that symlinks will not make sense while using that mode.

telamon
  • 415
  • 1
  • 6
  • 6
13

None of the existing answers address the case of a container that exited (and can't be restarted) and/or doesn't have any shell installed (e.g. distroless ones). This one works as long as you have root access to the Docker host.

For a real manual inspection, find out the layer IDs first:

docker inspect my-container | jq '.[0].GraphDriver.Data'

In the output, you should see something like

"MergedDir": "/var/lib/docker/overlay2/03e8df748fab9526594cfdd0b6cf9f4b5160197e98fe580df0d36f19830308d9/merged"

Navigate into this folder (as root) to find the current visible state of the container filesystem.
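If jq is available, the lookup and the directory change can be combined into one step (a sketch; it assumes the overlay2 storage driver so that MergedDir is populated, and it must be run as root):

cd "$(docker inspect my-container | jq -r '.[0].GraphDriver.Data.MergedDir')"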

Raphael
  • 9,779
  • 5
  • 63
  • 94
  • Unfortunately, for me the folder is empty even though the container's file system is clearly not. :\ – balu Apr 15 '21 at 11:31
8

This will launch a bash session for the image:

docker run --rm -it --entrypoint=/bin/bash <image-name>

LeYAUable
  • 1,613
  • 2
  • 15
  • 30
6

On newer versions of Docker you can run docker exec [container_name] [command], which runs the command inside your container.

So to get a list of all the files in a container just run docker exec [container_name] ls
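The same pattern works for any non-interactive command, so you can poke around without opening a shell; for example (the paths are only illustrative):

docker exec [container_name] ls -la /etc
docker exec [container_name] cat /etc/os-release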

xrh
  • 77
  • 1
  • 3
6

I wanted to do this, but I was unable to exec into my container as it had stopped and wasn't starting up again due to some error in my code.

What worked for me was to simply copy the contents of the entire container into a new folder like this:

docker cp container_name:/app/ new_dummy_folder

I was then able to explore the contents of this folder as one would do with a normal folder.

flyer2403
  • 961
  • 7
  • 7
5

If you are using Docker v19.03, follow the steps below.

# find ID of your running container:

  docker ps

# create image (snapshot) from container filesystem

  docker commit 12345678904b5 mysnapshot

# explore this filesystem 

  docker run -t -i mysnapshot /bin/sh
4

For me, this one works well (thanks to the last comments for pointing out the directory /var/lib/docker/):

chroot /var/lib/docker/containers/2465790aa2c4*/root/

Here, 2465790aa2c4 is the short ID of the running container (as displayed by docker ps), followed by a star.

dashohoxha
  • 221
  • 1
  • 5
4

For docker aufs driver:

The script will find the container root dir (tested on docker 1.7.1 and 1.10.3):

#!/bin/sh
# Usage: docker-find-root <container_id_or_name>
if [ -z "$1" ] ; then
    echo 'usage: docker-find-root <container_id_or_name>'
    exit 1
fi
CID=$(docker inspect --format '{{.Id}}' "$1")
if [ -n "$CID" ] ; then
    if [ -f /var/lib/docker/image/aufs/layerdb/mounts/$CID/mount-id ] ; then
        F1=$(cat /var/lib/docker/image/aufs/layerdb/mounts/$CID/mount-id)
        d1=/var/lib/docker/aufs/mnt/$F1
    fi
    if [ ! -d "$d1" ] ; then
        d1=/var/lib/docker/aufs/diff/$CID
    fi
    echo "$d1"
fi
qxo
  • 1,584
  • 15
  • 11
4

This answer will help those (like myself) who want to explore the docker volume filesystem even if the container isn't running.

List running docker containers:

docker ps

=> CONTAINER ID "4c721f1985bd"

Look at the docker volume mount points on your local physical machine (https://docs.docker.com/engine/tutorials/dockervolumes/):

docker inspect -f {{.Mounts}} 4c721f1985bd

=> [{ /tmp/container-garren /tmp true rprivate}]

This tells me that the local physical machine directory /tmp/container-garren is mapped to the /tmp docker volume destination.

Knowing the local physical machine directory (/tmp/container-garren) means I can explore the filesystem whether or not the docker container is running. This was critical to helping me figure out that there was some residual data that shouldn't have persisted even after the container was not running.
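For a more readable view of the mounts, you can also ask docker inspect for JSON output (a sketch; jq is assumed to be installed):

docker inspect -f '{{ json .Mounts }}' 4c721f1985bd | jq .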

Garren S
  • 5,552
  • 3
  • 30
  • 45
  • 1
    This only finds a local directory that is mounted as a volume inside the container but does not allow accessing container's entire file system. – Bojan Komazec Jul 25 '19 at 10:37
3

For an already running container, you can do:

dockerId=$(docker inspect -f {{.Id}} [docker_id_or_name])

cd /var/lib/docker/btrfs/subvolumes/$dockerId

You need to be root in order to cd into that dir. If you are not root, try 'sudo su' before running the command.

Edit: Following v1.3, see Jiri's answer - it is better.

0x90
  • 39,472
  • 36
  • 165
  • 245
AlonL
  • 6,100
  • 3
  • 33
  • 32
  • 4
    I'm strongly partial to "sudo -i" rather than "sudo su" because there's little reason to run a suid program which launches another suid program which launches a shell. Cut out the middle man. :) – dannysauer Aug 21 '14 at 21:39
  • Your answer is very good, only the path isn't. You should use piercebot's path. – Florent Jan 05 '16 at 00:37
3

Another trick is to use the atomic tool to do something like:

mkdir -p /path/to/mnt && atomic mount IMAGE /path/to/mnt

The Docker image will be mounted to /path/to/mnt for you to inspect it.

Giuseppe Scrivano
  • 1,385
  • 10
  • 13
  • But you need to have specially made containers for this to work, right? Maybe you should add it as a caveat, cause most people wont be able to sell it to their team/company as a solution... – Angelos Pikoulas Oct 10 '18 at 17:53
2

My preferred way to understand what is going on inside a container is:

  1. Expose port 8000 with -p when running the container

    docker run -it -p 8000:8000 image
    
  2. Start a server inside it

    python -m SimpleHTTPServer
    
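Note that SimpleHTTPServer is Python 2 only; if the image ships Python 3 instead, the equivalent would be:

    python3 -m http.server 8000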
m00am
  • 5,910
  • 11
  • 53
  • 69
kgnete
  • 214
  • 2
  • 5
2

Oftentimes I only need to explore the Docker filesystem because my build won't run, so docker run -it <container_name> bash is impractical. I also do not want to waste time and memory copying filesystems, so docker cp <container_name>:<path> <target_path> is impractical too.

While possibly unorthodox, I recommend re-building with ls as the final command in the Dockerfile:

CMD [ "ls", "-R" ]
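Rebuilding and running then prints the recursive listing; a quick sketch (the tag debug-ls is just an example):

docker build -t debug-ls .
docker run --rm debug-ls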
tim-montague
  • 16,217
  • 5
  • 62
  • 51
1

If you are using the AUFS storage driver, you can use my docker-layer script to find any container's filesystem root (mnt) and read-write layer:

# docker-layer musing_wiles
rw layer : /var/lib/docker/aufs/diff/c83338693ff190945b2374dea210974b7213bc0916163cc30e16f6ccf1e4b03f
mnt      : /var/lib/docker/aufs/mnt/c83338693ff190945b2374dea210974b7213bc0916163cc30e16f6ccf1e4b03f

Edit 2018-03-28 :
docker-layer has been replaced by docker-backup

Vince
  • 3,274
  • 2
  • 26
  • 28
1

The docker exec command, which runs a command in a running container, can help in multiple cases.

Usage:  docker exec [OPTIONS] CONTAINER COMMAND [ARG...]

Run a command in a running container

Options:
  -d, --detach               Detached mode: run command in the background
      --detach-keys string   Override the key sequence for detaching a
                             container
  -e, --env list             Set environment variables
  -i, --interactive          Keep STDIN open even if not attached
      --privileged           Give extended privileges to the command
  -t, --tty                  Allocate a pseudo-TTY
  -u, --user string          Username or UID (format:
                             <name|uid>[:<group|gid>])
  -w, --workdir string       Working directory inside the container

For example :

  1. Accessing the running container filesystem in bash:

    docker exec -it containerId bash

  2. Accessing the running container filesystem in bash as root, to have the required rights:

    docker exec -it -u root containerId bash

This is particularly useful to be able to do some processing as root in a container.

  3. Accessing the running container filesystem in bash with a specific working directory:

    docker exec -it -w /var/lib containerId bash

Benjamin Loison
  • 3,782
  • 4
  • 16
  • 33
davidxxx
  • 125,838
  • 23
  • 214
  • 215
1

I've found the easiest, all-in-one solution to view, edit, and copy files with a GUI app inside almost any running container.

(screenshot: mc editing files in docker)

  1. Inside the container, install mc and ssh: docker exec -it <container> /bin/bash, then at the prompt install the mc and ssh packages
  2. In the same exec-bash console, run mc
  3. Press ESC then 9 then ENTER to open the menu and select "Shell link..."
  4. Using "Shell link...", open SCP-based filesystem access to any host with an ssh server running (including the one running Docker) by its IP address
  5. Do your job in the graphical UI

This method overcomes all issues with permissions, snap isolation, etc., allows copying directly to any machine, and is the most pleasant to use for me.

grandrew
  • 698
  • 7
  • 12
1

I had an unknown container that was doing some production workload, and I did not want to run any command in it.

So, I used docker diff.

This will list all files that the container has changed, and is therefore well suited to exploring the container file system.

To get only a folder you can just use grep:

docker diff <container> | grep /var/log

It will not show files from the docker image. Depending on your use case this can help or not.

sschoof
  • 1,531
  • 1
  • 17
  • 24
1

Late to the party, but in 2022 we have VS Code and its Docker extension, which can browse a running container's filesystem from the editor.

Brainware
  • 531
  • 6
  • 15
  • 2
    This extension is closed source, VSCode itself is privacy-unfriendly, and it is overkill if a person only needs to explore files. – Nairum Mar 14 '23 at 08:57
0

You can run bash inside a container with this: $ docker run -it ubuntu /bin/bash

Yang Yu
  • 312
  • 3
  • 8
-3

Practically all containers I use have Python, so I attach to the container and run:

pip install jupyterlab
cd /
jupyter lab --allow-root

I Ctrl+click the link that the Jupyter Lab server offers, and in the host's browser I have the perfect GUI for the file system and can open all kinds of files (ipynb, py, md (in preview), ...).

Cheers
G.

gue22
  • 567
  • 4
  • 8