
I am trying to use docker for development by mounting a folder from the container to the host, as the standard host-to-container approach doesn't work well for a certain project I am working on.

Currently, I do that using bindfs (which also maps the user permissions) as suggested in this issue:

# find the container's main process and its root filesystem under /proc
pid=$(docker inspect -f '{{.State.Pid}}' "$container")
root=/proc/$pid/root

# bind-mount a directory from the container's rootfs onto the host,
# remapping uid 1000 in the container to the current user
sudo bindfs --map=1000/"$(id -u)" "$root$source" "$target"

However, taking the rootfs from /proc seems very fragile, as it depends on the pid of the process. Is there an alternative way to do this?

If there is a way of finding the rootfs regardless of the storage driver used, I could use that in bindfs instead. The question Where is the rootfs of container in host machine after docker 1.6.0 says the location varies with the storage driver used, but doesn't say how to get it.
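
For example (assuming the overlay2 driver; other drivers expose different metadata), the merged rootfs path can be read from the container's graph-driver data:

# overlay2 only: prints e.g. /var/lib/docker/overlay2/<id>/merged
docker inspect -f '{{.GraphDriver.Data.MergedDir}}' "$container"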

I am really hesitant to rely on a specific storage driver, due to performance reasons. I am also wondering if this is even possible: since it is a "union filesystem", will there be a single "static" rootfs at all?

Nishant
  • Are you using `docker` / developing directly on a Linux host? Otherwise the remapping of permissions with `bindfs` might be unnecessary, since these are mapped between the virtual `docker` host (Linux in a VM) and your host (macOS/Windows) anyway. – acran Nov 24 '20 at 11:39
  • Could you explain a bit more why a host-to-container mount wouldn't work for you / what the specifics of your use case are that make it the other way around? Also, why do you think using `/proc/$pid/root` is _fragile_? To me this actually seems to be a reasonable solution. – acran Nov 24 '20 at 11:41
  • @acran, Thanks for checking. I will explore the first point. Are you referring to `/mnt/C` that gets mounted automatically in `WSL`? I don't see that in `Hyper-V`. The reason *host-to-container* mount doesn't work is: we have lots of repositories that need to be in development mode; plus it helps if we expose the config file and wheels directory to the end user. Giving something tested to use is much better than mounting something which might not always work (for all sorts of reasons -- our CMS is like that). As for `pid`, it changes when you stop-start the container (still works though)! – Nishant Nov 24 '20 at 13:13

1 Answer


If I understand correctly you don't necessarily want to access the whole filesystem of the container but rather only relevant directories containing the application.

If your main intent is to ship your run-time environment as a single bundled container image while allowing your users to access and modify the application files, then using an ordinary bind-volume and copying the files on startup would be the easiest way in my opinion, i.e.

docker run -v $PWD:/app-data/ your_app

This will bind-mount your current directory as (an empty) /app-data/ into the container. Then in the container you need to copy the application files into that directory (if not already present):

#!/bin/bash

# script /docker-entrypoint.sh

# test if volume is already initialized
# e.g. see if src directory exists
if ! [ -d /app-data/src ]; then
  # copy all files (including dotfiles) from the shipped /app-dist/
  # to the actual run-time location
  cp -a /app-dist/. /app-data/
fi

# continue executing the command
exec "$@"

This will copy the shipped application files into the mounted volume on the host, where the user can access and edit them, whenever they are not already present. If you always want to use the latest files from the current image, you can just cp them unconditionally. The required files need to be put into /app-dist/ in the Dockerfile.
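
A minimal sketch of the corresponding Dockerfile (the paths and the final command are illustrative):

# ship the application files inside the image under /app-dist/
COPY src/ /app-dist/src/
COPY docker-entrypoint.sh /docker-entrypoint.sh

# the entrypoint copies the files on startup and then runs the CMD
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["your-app-command"]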

The benefit of this approach is that it is very easy to support since it uses ordinary volumes. The drawback is of course the increased startup time since all the files have to be copied first.

The next best approach would be to use anonymous (unnamed) volumes and bind-mount their underlying host path to an accessible location:

# start container with volume
docker run -d -v /app-data/ --name your_app_container your_app

# get underlying volume path of /app-data on the host
VOLUME_HOST_PATH="$(docker inspect -f '{{range .Mounts}}{{if eq .Destination "/app-data"}}{{.Source}}{{end}}{{end}}' your_app_container)"

# bind-mount the volume path to a user-accessible path
sudo mount --bind "$VOLUME_HOST_PATH" "$target"

Instead of starting the container with an explicit -v you can also use a VOLUME in the Dockerfile.
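
For example:

# Dockerfile: declare an anonymous volume at /app-data/; docker will
# initialize it with the image's content at that path on first start
VOLUME /app-data/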

The advantage of this approach is that docker will do the copying for you when initializing the volume - so no need for a custom entrypoint - and you won't depend on the $pid of the container as in your solution.

But the drawback of increased startup time stays as the files still need to be copied. Also this might not reliably work for all storage drivers.

Lastly, your own solution of bind-mounting the container's /proc/$pid/root/ should work with all storage drivers, since /proc/$pid/root/ gives you access to the whole filesystem as seen by the container, i.e. with all additional (bind-)mounts and volumes within its namespace.
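
To make the dependency on the $pid less fragile you can re-resolve it on every (re-)mount, e.g. with a small wrapper (a sketch; the script name and argument handling are illustrative):

#!/bin/bash
# bindfs-container.sh <container> <source-in-container> <target-on-host>
# looks up the container's current pid on each invocation, so after a
# stop/start of the container you only need to re-run this script
set -euo pipefail

container="$1"
source="$2"
target="$3"

pid="$(docker inspect -f '{{.State.Pid}}' "$container")"
sudo bindfs --map=1000/"$(id -u)" "/proc/$pid/root$source" "$target"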

In any case, using bindfs should not be necessary when sharing volumes between the docker host and the actual macOS/Windows host, since the mapping of access permissions between the different operating systems is done automatically.

At the same time this may rule out the latter solutions: in such a setup the bind-mounting will only work inside the Linux VM used as the docker host under the hood and will not translate to a mapped path on the macOS/Windows host.

Addendum

Another approach that just crossed my mind: exposing filesystem access via the network.

You could add another service providing file access via a network protocol such as FTP, SFTP or SMB -- or integrate it into an existing service. This would eliminate the unnecessary copying of data and will work with all setups and storage drivers, since all it needs is an exposed network port.
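
For example (a sketch; the atmoz/sftp image, the volume name and the credentials are just illustrative, any SFTP/SMB-capable image would work the same way):

# run the app with a named volume instead of an anonymous one
docker run -d -v app-data:/app-data/ --name your_app_container your_app

# run an SFTP sidecar sharing the same volume
docker run -d \
  -v app-data:/home/user/app-data \
  -p 2222:22 \
  atmoz/sftp user:pass:1000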

The "downside" of this is that this will not (automatically) map the volumes into the local filesystem of the host. This may or may not be a problem for your use case.

acran
  • @acran, thanks, I will check up on the unnamed volume stuff (didn't know it copies automatically); the problem is that my /apps is almost 2 gigs in size :-). Also, they might need 5-10 such containers with minor modifications! Eventually, keeping both possibilities open seems like a good solution. (I do all this via a helper script.) – Nishant Nov 24 '20 at 20:22
  • I expanded my answer with another solution using network shares. – acran Nov 29 '20 at 00:51
  • Yeah, that's a good one. The only downside is the need for something like `samba` running inside the container. But yeah, I will think about this again. – Nishant Nov 29 '20 at 13:01