91

I am running a Jenkins cluster in which both the Master and the Slave run as Docker containers.

The host is the latest boot2docker VM running on macOS.

To allow Jenkins to perform deployments using Docker, I have mounted docker.sock and the docker client binary from the host into the Jenkins container like this:

docker run -v /var/run/docker.sock:/var/run/docker.sock \
    -v $(which docker):/usr/bin/docker \
    -v $HOST_JENKINS_DATA_DIRECTORY/jenkins_data:/var/jenkins_home \
    -v $HOST_SSH_KEYS_DIRECTORY/.ssh/:/var/jenkins_home/.ssh/ \
    -p 8080:8080 jenkins

I am facing issues when mounting a volume into Docker containers that are run from inside the Jenkins container. For example, if I need to run another container from inside the Jenkins container, I do the following:

sudo docker run -v $JENKINS_CONTAINER/deploy.json:/root/deploy.json $CONTAINER_REPO/$CONTAINER_IMAGE 

The above runs the container, but the file "deploy.json" is NOT mounted as a file; it shows up as a directory instead. Even when I mount a directory as a volume, I am unable to see its files in the resulting container.

Is this a problem with file permissions, caused by the Docker-in-Docker setup?

ZephyrPLUSPLUS

  • I'm having the same problem when running Docker on an EC2 host, with `docker.sock` mounted so that the container can use the host Docker. It looks like your answer below is correct - the volume that appears in the inner-most container contains files that are from the EC2 host. – Sherwood Callaway Jan 09 '19 at 18:45

11 Answers

105

A Docker container started from inside another Docker container uses the parent HOST's Docker daemon. Hence, any volumes mounted in this "docker-in-docker" case are still resolved against paths on the HOST, not paths inside the container.

Therefore, the actual path being mounted from the Jenkins container "does not exist" on the HOST. Because of this, Docker creates a new, empty directory at that path and mounts it into the "docker-in-docker" container. The same thing happens when a directory is mounted into a new Docker container from inside a container.

A very basic and obvious thing which I missed, but realized as soon as I typed the question.
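
In practical terms, the fix is to reference a path that exists on the HOST when running the inner container. A minimal sketch using the variables from the question, assuming deploy.json lives inside the mounted Jenkins home (so its host-side location is under $HOST_JENKINS_DATA_DIRECTORY/jenkins_data, and that host path has been made known inside the Jenkins container, e.g. via an environment variable):

# Run from inside the Jenkins container, but the -v source is a HOST path,
# because the mounted docker.sock talks to the HOST's Docker daemon
sudo docker run \
    -v $HOST_JENKINS_DATA_DIRECTORY/jenkins_data/deploy.json:/root/deploy.json \
    $CONTAINER_REPO/$CONTAINER_IMAGE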

ZephyrPLUSPLUS

  • So what's the solution? Because the Docker documentation refers to https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/ which says to use docker this way. But this way cannot mount volumes from one docker container into another. Data volumes? – Julio Guerra Feb 23 '16 at 17:54
  • @ZephyrPLUSPLUS could you post what you had and what you changed it into, so that others can benefit from your answer? – mhenrixon Mar 31 '16 at 16:36
  • It's great to know that you solved the problem, but what is the actual solution? How did you mount this folder? – Zhorzh Alexandr Mar 21 '17 at 15:51
  • @JulioGuerra we'd also like to know, having committed to the recommended approach from that blog post, which says it "looks like Docker-in-Docker [and] feels like Docker-in-Docker" but fails to mention this huge caveat! – c24w Jun 20 '17 at 17:53
  • Hello guys, did you manage to resolve the issue? I can't mount any volumes – idmitriev Jan 30 '19 at 12:04
  • This post does not actually solve the problem. It merely explains the problem further. – Isen Ng Oct 14 '19 at 06:44
32

Lots of good info in these posts, but I find none of them are very clear about which container they are referring to, so let's label the three environments:

  • host: H
  • docker container running on H: D
  • docker container running in D: D2

We all know how to mount a folder from H into D: start D with

docker run ... -v <path-on-H>:<path-on-D> -v /var/run/docker.sock:/var/run/docker.sock ...

The challenge is: you want path-on-H to be available in D2 as path-on-D2.

But we all got bitten when trying to mount the same path-on-H into D2, because we started D2 with

docker run ... -v <path-on-D>:<path-on-D2> ...

When you share the Docker socket on H with D, running docker commands in D is essentially running them on H. Indeed, if you start D2 like this, everything works (quite unexpectedly at first, but it makes sense when you think about it):

docker run ... -v <path-on-H>:<path-on-D2> ...

The next tricky bit is that, for many of us, path-on-H will change depending on who runs it. There are many ways to pass data into D so that it knows what to use for path-on-H, but probably the easiest is an environment variable. To make the purpose of such a variable clearer, I start its name with DIND_. Then from H, start D like this:

docker run ... -v <path-on-H>:<path-on-D> --env DIND_USER_HOME=$HOME \
    --env DIND_SOMETHING=blabla -v /var/run/docker.sock:/var/run/docker.sock ...

and from D start D2 like this:

docker run ... -v $DIND_USER_HOME:<path-on-D2> ...
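
For concreteness, a hypothetical end-to-end run of the above (the paths and image names here are made up, not from the original question):

# On H: mount the host path into D and also pass it along as an env var
docker run -v /home/alice/project:/workspace \
    -v /var/run/docker.sock:/var/run/docker.sock \
    --env DIND_PROJECT_DIR=/home/alice/project \
    my-jenkins-image

# Inside D: use the H path carried in DIND_PROJECT_DIR (not /workspace),
# because the daemon that resolves the mount lives on H
docker run -v "$DIND_PROJECT_DIR":/app my-build-image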
Oliver
25

Another way to go about this is to use either named volumes or data volume containers. This way, the inner container doesn't have to know anything about the host, and both the Jenkins container and the build container reference the data volume in the same way.

I have tried doing something similar to what you are doing, except with an agent rather than using the Jenkins master. The problem was the same, in that I couldn't mount the Jenkins workspace in the inner container. What worked for me was the data volume container approach, and the workspace files were visible to both the agent container and the inner container. What I liked about the approach is that both containers reference the data volume in the same way. Mounting directories with an inner container would be tricky, as the inner container would then need to know something about the host that its parent container is running on.

I have a detailed blog post about my approach here:

http://damnhandy.com/2016/03/06/creating-containerized-build-environments-with-the-jenkins-pipeline-plugin-and-docker-well-almost/

As well as code here:

https://github.com/damnhandy/jenkins-pipeline-docker

In my specific case, not everything is working the way I'd like it to in terms of the Jenkins Pipeline plugin. But it does address the issue of the inner container being able to access the Jenkins workspace directory.
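
For illustration, here is a minimal sketch of the named-volume flavour of this approach (the volume and image names are hypothetical, not taken from the post above):

# Create a named volume once; the Docker daemon manages where it lives on the host
docker volume create jenkins-workspace

# Start the Jenkins (or agent) container with the named volume mounted
docker run -v /var/run/docker.sock:/var/run/docker.sock \
    -v jenkins-workspace:/var/jenkins_home/workspace \
    jenkins

# From inside Jenkins, the build container mounts the same volume by name,
# so neither container needs to know any host path
docker run -v jenkins-workspace:/workspace my-build-image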

Ryan J. McDonough

  • I cannot believe someone down-voted this answer. This is brilliant and gets right to the heart of the matter. It's a solution that feels like it's using docker's features for the reasons they exist. – neverfox Apr 29 '17 at 00:04
  • Another related blog post about it can be found [here](https://www.develves.net/blogs/asd/2016-05-27-alternative-to-docker-in-docker/) (not mine). – helmesjo Nov 01 '17 at 04:30
  • This is great, except I need a solution to run docker-compose. Any leads? – Inbar Rose May 06 '18 at 12:32
9

Regarding your use case related to Jenkins, you can simply fake the path by creating a symlink on the host:

ln -s $HOST_JENKINS_DATA_DIRECTORY/jenkins_data /var/jenkins_home
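
To sketch the effect (using the question's variables): once the symlink exists on the host, the path that Jenkins uses inside its container also resolves on the host, so a mount issued from inside Jenkins points at real data. Assuming deploy.json lives inside the Jenkins home:

# From inside the Jenkins container; /var/jenkins_home/... now also resolves
# on the host via the symlink, so the HOST daemon finds the real file
docker run -v /var/jenkins_home/deploy.json:/root/deploy.json $CONTAINER_REPO/$CONTAINER_IMAGE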
mperrin

  • I am curious to use this solution; however, I am not sure how to use it. Which host should this be run on? How does it solve the problem? – Inbar Rose May 06 '18 at 12:31
  • @InbarRose, this command should be run on the **host** machine, where the docker daemon is running. After that you will have two "directories" `/var/jenkins_home` (with the same name and content) on the host machine and in the Jenkins container, so you can use that directory name to mount data in "docker-in-docker" containers, which are started by Jenkins jobs. – Alexey Prudnikov Jul 10 '18 at 11:43
5

If you are like me and don't want to mess with the Jenkins setup, or are too lazy to go through all this trouble, here is a simple workaround I used to get this working for me.

Step 1 - Add the following variables to the environment section of the pipeline

environment {
    ABSOLUTE_WORKSPACE = "/home/ubuntu/volumes/jenkins-data/workspace" 
    JOB_WORKSPACE = "\${PWD##*/}"
}

Step 2 - Run your container from the Jenkins pipeline with a step like the following.

    steps {
        sh "docker run -v ${ABSOLUTE_WORKSPACE}/${JOB_WORKSPACE}/my/dir/to/mount:/targetPath imageName:tag"
    }

Take note of the double quotes in the above statement; Jenkins will not substitute the environment variables if the quotes are not formatted properly or if single quotes are used instead.


What does each variable signify?

  • ABSOLUTE_WORKSPACE is the path of our Jenkins volume which we mounted while starting the Jenkins Docker container. In my case, the docker run command was as follows.

    sudo docker run \
        -p 80:8080 \
        -v /home/ubuntu/volumes/jenkins-data:/var/jenkins_home \
        -v /var/run/docker.sock:/var/run/docker.sock \
        -d -t jenkinsci/blueocean

Thus the variable ABSOLUTE_WORKSPACE = /home/ubuntu/volumes/jenkins-data + /workspace.

  • JOB_WORKSPACE gives us the name of the current workspace directory, where your code lives. This is also the root dir of your code base. (I just followed this answer for reference.)

How does this work?

It is very straightforward. As mentioned in @ZephyrPLUSPLUS's answer (credit where due), the source path for a Docker container run from the Jenkins pipeline is not a path in the current container; the path used is the host's. All we are doing here is constructing the host path where our Jenkins pipeline is running and mounting it into our container. Voila!


damitj07
3

This also works via docker-compose and/or named volumes so you don't need to create a data only container, but you still need to have the empty directory on the host.

Host setup

Make host-side directories and set permissions to allow Docker containers to access them:

sudo mkdir -p /var/jenkins_home/{workspace,builds,jobs} && \
    sudo chown -R 1000 /var/jenkins_home && \
    sudo chmod -R a+rwx /var/jenkins_home

docker-compose.yml

version: '3.1'
services:
  jenkins:
    build: .
    image: jenkins
    ports:
      - 8080:8080
      - 50000:50000
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - workspace:/var/jenkins_home/workspace/
      # Can also do builds/jobs/etc here and below
  jenkins-lts:
    build:
      context: .
      args:
        versiontag: lts
    image: jenkins:lts
    ports:
      - 8081:8080
      - 50001:50000
volumes:
  workspace:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /var/jenkins_home/workspace/

When you docker-compose up --build jenkins (you may want to incorporate this into a ready-to-run example like https://github.com/thbkrkr/jks, where the .groovy scripts pre-configure Jenkins to be useful on startup), your jobs will be able to clone into the $JENKINS_HOME/workspace directory without errors about missing files, because the host and container paths match; running further containers from within the Docker-in-Docker setup should then work as well.

Dockerfile (for Jenkins with Docker in Docker)

ARG versiontag=latest
FROM jenkins/jenkins:${versiontag}

ENV JAVA_OPTS="-Djenkins.install.runSetupWizard=false"

COPY jenkins_config/config.xml /usr/share/jenkins/ref/config.xml.override
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt

USER root
RUN curl -L http://get.docker.io | bash && \
    usermod -aG docker jenkins
# Since the above takes a while make any other root changes below this line
# eg `RUN apt update && apt install -y curl`
# drop back to the regular jenkins user - good practice
USER jenkins
EXPOSE 8080
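
Once the stack is up, the payoff is that a Jenkins job can start a build container with a workspace mount that resolves on the host, because the container path and host path are identical. A hypothetical example (job and image names are made up):

# Run from inside the jenkins service container, e.g. in a job's shell step;
# /var/jenkins_home/workspace/... exists under the same path on the host
docker run -v /var/jenkins_home/workspace/my-job:/workspace my-build-image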
dragon788
2

A way to work around this issue is to mount a directory (inside your Docker container in which you mounted your Docker socket) using the exact same path as on the host. Then, when you run a container from within that container, you are able to mount anything under that mount's path into the new container using docker -v.

Take this example:

# Spin up your container from which you will use docker
docker run -v /some/dir:/some/dir -v /var/run/docker.sock:/var/run/docker.sock docker:latest

# Now spin up a container from within this container
docker run -v /some/dir:/usr/src/app $CONTAINER_IMAGE

The folder /some/dir is now mounted across your host, the intermediate container, and your destination container. Since the mount's path exists on both the host and the "nearly docker-in-docker" container, you can use docker -v as expected.

It's kind of similar to the suggestion of creating a symlink on the host, but I found this (at least in my case) a cleaner solution. Just don't forget to clean up the dir on the host afterwards! ;)

Toon Lamberigts
2

I had the same problem in GitLab CI. I solved it by using docker cp to do something like a mount:

script:
  - docker run --name ${CONTAINER_NAME} ${API_TEST_IMAGE_NAME}
after_script:
  - docker cp ${CONTAINER_NAME}:/code/newman ./
  - docker rm ${CONTAINER_NAME}
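
If you also need to get files into the container rather than out of it, the same docker cp idea can be used in reverse by creating the container first and starting it afterwards. A hedged sketch (the file name is a placeholder borrowed from the question; the variables are the same ones used above):

docker create --name ${CONTAINER_NAME} ${API_TEST_IMAGE_NAME}
docker cp ./deploy.json ${CONTAINER_NAME}:/root/deploy.json
docker start -a ${CONTAINER_NAME}
docker rm ${CONTAINER_NAME}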
BohanZhang
1

Based on the description given by @ZephyrPLUSPLUS, here is how I managed to solve this:

vagrant@vagrant:~$ hostname
vagrant
vagrant@vagrant:~$ ls -l /home/vagrant/dir-new/
total 4
-rw-rw-r-- 1 vagrant vagrant 10 Jun 19 11:24 file-new
vagrant@vagrant:~$ cat /home/vagrant/dir-new/file-new 
something
vagrant@vagrant:~$ docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock  docker /bin/sh
/ # hostname
3947b1f93e61
/ # ls -l /home/vagrant/dir-new/
ls: /home/vagrant/dir-new/: No such file or directory
/ # docker run -it --rm -v /home/vagrant/dir-new:/magic ubuntu /bin/bash
root@3644bfdac636:/# ls -l /magic
total 4
-rw-rw-r-- 1 1000 1000 10 Jun 19 11:24 file-new
root@3644bfdac636:/# cat /magic/file-new 
something
root@3644bfdac636:/# exit
/ # hostname
3947b1f93e61
/ # vagrant@vagrant:~$ hostname
vagrant
vagrant@vagrant:~$ 

So Docker is installed on a Vagrant machine; let's call it vagrant. The directory you want to mount is /home/vagrant/dir-new on vagrant. The first docker run starts a container whose hostname is 3947b1f93e61. Notice that /home/vagrant/dir-new/ is not mounted into 3947b1f93e61. Next, we use the exact location from vagrant, /home/vagrant/dir-new, as the source of the mount and specify any mount target we want, in this case /magic. Also note that /home/vagrant/dir-new does not exist in 3947b1f93e61. This starts another container, 3644bfdac636. Now the contents of /home/vagrant/dir-new on vagrant can be accessed from 3644bfdac636.

I think this is because docker-in-docker creates a sibling container, not a child, so the source path you specify must be the host's (vagrant's) path and not the sibling's path. Any mount will still refer to paths on vagrant, no matter how deep you nest docker-in-docker.

Titi Wangsa bin Damhore
0

You can solve this by passing in an environment variable. Example:

.
├── docker-compose.yml
└── my-volume-dir
    └── test.txt

In docker-compose.yml

version: "3.3"
services:
  test:
    image: "ubuntu:20.04"
    volumes:
      - ${REPO_ROOT-.}/my-volume-dir:/my-volume
    entrypoint: ls /my-volume

To test, run

docker run -e REPO_ROOT=${PWD} \
   -v /var/run/docker.sock:/var/run/docker.sock \
   -v ${PWD}:/my-repo \
   -w /my-repo \
   docker/compose \
   docker-compose up test

You should see in the output:

test_1  | test.txt
andrebask
0

After a lot of back and forth and different workarounds, I decided to fix this issue for good:

docker-on-docker-shim: A shim that remaps volume mounts so they work when running docker on docker.

Simply install it in your image and call docker as you are used to, without having to tinker with your --volume or --mount flags.

Feedback is welcome!

Try it yourself

In the following example, the /usr/local/bin/dind file is only available within the container. See how the shim makes mounting it work:

# This does not work
$ docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:latest \
    docker run --rm -v /usr/local/bin/dind:/dind alpine test -f /dind

# A non-zero exit code indicates that it did not work
$ echo $?
1

# This works
$ docker run --rm -v /var/run/docker.sock:/var/run/docker.sock ghcr.io/felipecrs/dond-shim:latest \
    docker run --rm -v /usr/local/bin/dind:/dind alpine test -f /dind

# A zero exit code indicates that it worked
$ echo $?
0
felipecrs