
In Manage Jenkins > Global Tool Configuration, I have already configured a tool called "docker" as follows:

name:                   docker
install automatically:  CHECKED
docker version:         latest

Then all I have in my Jenkinsfile is the following, and nothing else:

node {
    DOCKER_HOME = tool "docker"
    sh """
        echo $DOCKER_HOME
        ls $DOCKER_HOME/bin/
        $DOCKER_HOME/bin/docker images
        $DOCKER_HOME/bin/docker ps -a
    """
}

I get an error like this: "Cannot connect to the Docker daemon. Is the docker daemon running on this host?"

Following is the full console log:

Started by user Syed Rakib Al Hasan
[Pipeline] node
Running on master in /var/jenkins_home/workspace/helloDocker
[Pipeline] {
[Pipeline] tool
[Pipeline] sh
[helloDocker] Running shell script
+ echo /var/jenkins_home/tools/org.jenkinsci.plugins.docker.commons.tools.DockerTool/docker
/var/jenkins_home/tools/org.jenkinsci.plugins.docker.commons.tools.DockerTool/docker
+ ls /var/jenkins_home/tools/org.jenkinsci.plugins.docker.commons.tools.DockerTool/docker/bin/
docker
+ /var/jenkins_home/tools/org.jenkinsci.plugins.docker.commons.tools.DockerTool/docker/bin/docker images
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE

How do I ensure that the docker daemon/service is running/started before my pipeline reaches the line that runs docker commands?

Is there any other native docker-build-step plugin way to achieve what I am doing here, like docker ps -a, docker images, or docker build -t?

Some assumptions:

Let's say my chosen node does not already have docker/docker-engine installed or running on the host machine. That's the purpose of the tool command: to automatically install docker on the node if it is not already there.

Rakib
  • You need to not only have docker installed, you also need the docker daemon to be running. You can either configure your hosts by other means, such as ansible, systemd-unit files, AMIs, whatever works in your domain. Or you can make ensuring that the docker service is up and running one of the first (blocking) steps of your pipeline. – ffledgling Jan 19 '17 at 12:45
  • My Jenkins itself is inside a Docker container - the official Jenkins Docker container. Running `sudo service docker start` inside a container does not work, so I do not have that option. – Rakib Jan 19 '17 at 12:47
  • If your Jenkins is inside a Docker container, it still can't magically run docker commands; it needs access to a running Docker daemon, and there is no daemon running wherever you're running your commands. You can either expose the host's Docker daemon inside the container by forwarding the daemon socket, or install a daemon inside the container and use that. It depends on your setup. Please take some time to read up on how Docker works. – ffledgling Jan 19 '17 at 12:49 (see the socket-forwarding sketch after these comments)
  • I think the cleanest way would be to provision an additional host with docker and add it as a slave node for your jenkins. Those docker-in-docker approaches are fairly hacky... That way you can still run your jenkins master as a container and use the slave node for docker builds. – fishi0x01 Jan 19 '17 at 12:51
  • @ffledgling are you referring to the approach mentioned by `svendowideit` in this page https://forums.docker.com/t/using-docker-in-a-dockerized-jenkins-container/322/3 ? – Rakib Jan 19 '17 at 12:58
  • This reminds me of something I did with VirtualBox. I needed to use Windows to build something and needed tight control over the environment it was built in. So I just cloned a Windows VM in which the environment had been carefully constructed and then had Jenkins start it and instruct it to build the thing, then copy out the build products to the host. – Omnifarious Jan 19 '17 at 16:27
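Following up on ffledgling's comment about forwarding the daemon socket: below is a minimal sketch of how the official Jenkins container could be started so the docker CLI inside it talks to the host's daemon. The container name, port mappings and volume name are assumptions based on the official image defaults; the essential part is the bind mount of /var/run/docker.sock.

# run the official Jenkins image with the host's Docker socket mounted into the container
docker run -d \
  --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins

With this setup, the docker binary installed by the tool step connects to the host daemon through the mounted socket. Note that the jenkins user inside the container may still need permission on that socket (for example by matching the host's docker group), which is outside the scope of this sketch.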

1 Answer


This Jenkins plugin only provides the Docker client; I'd work around it by:

  • setting up Jenkins slaves where a Docker daemon is reachable, and adding a label to them
  • setting up a housekeeping job that fails if the Docker daemon is not reachable (so the infra team can be notified without QA having to figure out and escalate the problem)
  • assigning the jobs that assume a reachable Docker daemon to that label (a minimal sketch follows)
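A minimal sketch of what a job pinned to such agents could look like, assuming a hypothetical label named "docker-enabled" and the same "docker" tool configuration from the question; docker info serves as a cheap, blocking check that fails fast when the daemon is unreachable:

// assumes agents carrying the label "docker-enabled" can reach a Docker daemon
node('docker-enabled') {
    // resolve the tool configured in Global Tool Configuration and get its install path
    def dockerHome = tool 'docker'
    sh """
        ${dockerHome}/bin/docker info    # fails fast if the daemon is unreachable
        ${dockerHome}/bin/docker images
        ${dockerHome}/bin/docker ps -a
    """
}

The housekeeping job can be the same node('docker-enabled') block reduced to the docker info call, scheduled periodically so the infra team learns about an unreachable daemon before a build job hits it.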

I hope this helps, and I'm curious whether any of you have a better solution!

GyulaWeber