
I want to mount a directory from my Docker container onto my Windows PC using a Dockerfile. So far I have been able to do this with the following command:

```
docker run -v %userprofile%\mounted-docker\:/tmp/ container-name
```

This would mount /tmp/ from my Docker container into my C:\Users\USERNAME\mounted-docker\ folder. However, I can't seem to find the equivalent instruction in the Dockerfile documentation.

The closest thing I can find is `VOLUME` in the Dockerfile reference, which specifies:

Volumes on Windows-based containers: When using Windows-based containers, the destination of a volume inside the container must be one of:

a non-existing or empty directory
a drive other than C:

That's fine and all... but how exactly do I specify that? Let's say I want to mount either / or /tmp/ in a specified folder or drive, how do I do that?
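For reference, this is what the `VOLUME` instruction mentioned above looks like in a Dockerfile (note that it only names a path inside the container, with no place to put a host path):

```dockerfile
# Declares /tmp as a mount point inside the image.
# There is no syntax here for specifying a host directory.
VOLUME /tmp
```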

MortenMoulder
  • Possible duplicate of [How to mount host volumes into docker containers in Dockerfile during build](https://stackoverflow.com/questions/26050899/how-to-mount-host-volumes-into-docker-containers-in-dockerfile-during-build) – Erik Dannenberg Mar 26 '18 at 12:40
  • @ErikDannenberg I do not care about the host directory. The container should not know about the host. – MortenMoulder Mar 26 '18 at 12:41
  • The question and above comment seem to contradict each other. Mounting a volume from the host into the container is what the `-v` option you provided above does. Providing that flag implicitly gives the container access to the host directory. – BMitch Mar 26 '18 at 12:45

1 Answer


The Dockerfile is used to build the image. To define how you'd like to run that image, you'll want to use a docker-compose.yml file.

In a Dockerfile, you cannot specify where a volume will be mounted from on the host. Allowing that would open Docker up to malicious image exploits, where an image pulled from Docker Hub could mount the root filesystem and send private content to remote locations, or even carry out a ransomware attack. Specifying what elevated access a container gets is left to the user running the image, via `docker run` or a docker-compose.yml file.
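As a sketch, a minimal docker-compose.yml equivalent to the `docker run -v` command in the question might look like the following (the service name `myservice` is a placeholder; the image name and paths are taken from the question):

```yaml
version: "3"
services:
  myservice:
    image: container-name
    volumes:
      # host path on the left, container path on the right,
      # equivalent to: docker run -v %userprofile%\mounted-docker\:/tmp/ container-name
      - C:\Users\USERNAME\mounted-docker:/tmp
```

You would then start the container with `docker-compose up`, keeping the run-time mount configuration out of the image itself.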

BMitch
    See also this blog post on why I don't like defining a `VOLUME` at all in the Dockerfile: https://boxboat.com/2017/01/23/volumes-and-dockerfiles-dont-mix/ – BMitch Mar 26 '18 at 12:41
  • 1
    Just to clarify: I'm not the one who downvoted you. Anyways, I'm simply doing this because I want to access the generated files from my microservice. Is there a better way to access them? – MortenMoulder Mar 26 '18 at 12:43
  • Not sure the downvoter even read the answer, it came so fast. How to achieve your goal depends on how you want to access those files. The two options I can think of, pushing to a volume or serving the files over a port, both require options on the `docker run` CLI. If you haven't already, you'll soon find you want a compose file for running anything but the most basic containers. – BMitch Mar 26 '18 at 12:49
  • It's just a one-time thing I want to do. I've already uploaded a few files using cURL, but it's just too slow and I'm genuinely interested in knowing how to do this. I've thought about simply `docker exec -it` into it, then install a small SSH or FTP server, so I can (S)FTP into it. Just for the heck of it. – MortenMoulder Mar 26 '18 at 12:56
  • If you're up for an exec, then you could `tar` the contents and using stdout redirect, e.g.: `docker exec ${container} tar -cC /tmp . | tar -x`. Or you can use `docker cp` to copy files from the container to the host. These would be a copy of the files rather than the current state as they may be changing inside the container. – BMitch Mar 26 '18 at 13:01
  • 1
    Aah yeah, I could simply do `docker cp CONTAINER:/tmp/ C:\Users\USERNAME\folder-name\` and it would copy `/tmp/` from the container to my own folder at that destination specified. Awesome! Thanks a lot. – MortenMoulder Mar 26 '18 at 13:13