6

As part of my build pipeline, I have a build-tools container that is used for multiple projects. One of my projects contains a build step to build and publish a container, which is done from within the build-tools container. My Docker-enabled Jenkins slaves are configured with a user jenkins who is in the group docker. I used -v to mount the Docker binary and socket. This can be achieved/reproduced by either:

  • Adding the user (jenkins) and group (docker) in the Dockerfile of the build-tools image and setting them to the host's UID and GID
  • Starting the container with the -u option, providing the UID and GID (as per the documentation, the user and group do not need to exist within the container).

The issue with the first strategy is that the user and group IDs differ across the build machines. I could fix this by changing the UID and GID on all build machines to the same values, but wasn't Docker meant to run in isolation, without many dependencies on the environment/context? This does not feel like the right solution to me. The second strategy works perfectly fine on the command line; however, there seems to be no way of passing the UID and GID to the agent command in the Jenkinsfile. The args parameter does not support scripts or variables like $(id -u).
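
For illustration, this is roughly the kind of agent block I am after (the image name is just a placeholder); the args line shows what I would like to express, even though the $(id -u)/$(id -g) substitutions are not evaluated there:

    pipeline {
        agent {
            docker {
                image 'build-tools:latest'   // placeholder for my build-tools image
                // What I would like to pass; the shell substitutions below are
                // exactly what does not get evaluated in the args string:
                args '-u $(id -u):$(id -g) -v /var/run/docker.sock:/var/run/docker.sock'
            }
        }
        stages {
            stage('Build') {
                steps {
                    sh 'docker build -t my-project .'
                }
            }
        }
    }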

I expected not to be the first one facing this issue; however, I was not able to find a solution myself, via search engines or Stack Overflow. Should I go with 'prepped' build slaves, or is there a way to get the second strategy working?


-edit- I understand the options to run the container as root and to switch user after starting (e.g. using an entrypoint). However, that would require my Jenkins slave to be connected as root, which is unacceptable to me. Another alternative I found is to chmod 777 all resources, which completely defeats the security purpose of not running the Jenkins slave as root. I would prefer to use the -u option on the containers, but I can't find a way to determine the UID and GID on a Jenkins slave before starting up the Docker agent (the docker run command) from within the Jenkinsfile.

Jordi
  • You might be interested in this SO answer by @BMitch: [How to generate files in a docker container for having the same owner as the host's user](https://stackoverflow.com/a/47665682/9164010) – ErikMD Oct 24 '19 at 17:51
  • Possible duplicate of [How to generate files in a docker container for having the same owner as the host's user](https://stackoverflow.com/questions/47644556/how-to-generate-files-in-a-docker-container-for-having-the-same-owner-as-the-hos) – ErikMD Oct 24 '19 at 17:52
  • Thank you for your comment. That would require Jenkins to connect to the node/slave as the root user before running a docker container. For security reasons, SSH root logins are forbidden. I am also very hesitant to give the Jenkins user root access (e.g. via the sudoers file) – Jordi Oct 24 '19 at 19:45
  • You say you are hesitant to give the Jenkins user root access; however, if your `jenkins` user is in the `docker` group, it already has (de facto) root privileges (see e.g. [this blog article](https://www.projectatomic.io/blog/2015/08/why-we-dont-let-non-root-users-run-docker-in-centos-fedora-or-rhel/)) – ErikMD Oct 24 '19 at 20:54
  • BTW I've seen this other SO answer (also by @BMitch :) where the main alternative strategies (related to your use case) are recapitulated: [How to have host and container read/write the same files with Docker?](https://stackoverflow.com/a/56060521/9164010) – ErikMD Oct 24 '19 at 20:57
  • I understand all the alternatives. Some are a poor man's choice in my view (like chmod 777 on everything, or running the Jenkins slave as root). Others are very nice for Docker, but I couldn't find a solution to get them to work in a Jenkinsfile, for example providing the right -u parameters. Building the Docker container with a specified UID and GID on one slave will not work on other slaves, as these will have different values. It seems that I'm stuck with the option of aligning the UID and GID of the jenkins user and docker group on all slaves – Jordi Oct 25 '19 at 10:37
  • Fair comment on the blog article. My reasoning was that I don't want SSH root logins because they are a typical attack vector while internet-facing. The system is not used for anything other than Jenkins-triggered Docker containers (i.e. build agents). I am not that concerned about somebody running docker commands on the host; the only entry point for that would be the Jenkins jobs, which are secured by their own security/authorization mechanism. What is your view on that? – Jordi Oct 25 '19 at 10:47

2 Answers

2

A simple solution

Actually, I believe your first idea for a solution can be achieved easily with docker and without the need to run any Jenkins slave as root.

Consider this command:

docker run --rm -it -v /etc/passwd:/etc/passwd:ro -v /etc/shadow:/etc/shadow:ro -v /etc/group:/etc/group:ro debian:10 /bin/su linux-fan -c /bin/bash

This creates a new container and maps the users from the host into the container. Then, inside that container, it immediately drops to the user linux-fan, which only needs to exist on the host system.

Whether you run this command as root or as any user in the docker group does not make a difference (note that the comments are quite right that docker group = root access!).

Also, mapping things into the container this way (and even more so when mounting the Docker socket...) really gives up most of the isolation that a container provides. It would thus be sensible to consider running whichever command requires access to the host's Docker daemon directly on the host, or in a less isolated environment such as a chroot. Of course, the simplicity of invoking Docker may still outweigh the lack of isolation here.
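
A rough sketch of how this could look from a declarative Jenkinsfile (the image name is a placeholder, and I am assuming the Docker Pipeline plugin, which as far as I know already starts the agent container with the Jenkins user's UID and GID):

    pipeline {
        agent {
            docker {
                image 'build-tools:latest'   // placeholder for the build-tools image
                // Map the host's account databases read-only so that the UID/GID
                // the container runs with resolve to a known user and group;
                // /etc/shadow should not be needed since nothing logs in by password.
                args '-v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro'
            }
        }
        stages {
            stage('Check identity') {
                steps {
                    sh 'id'   // should now report the jenkins user and docker group from the host
                }
            }
        }
    }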

An alternative solution

A solution without host access could easily work around this: using Docker-in-Docker, i.e. running a new Docker daemon inside the build container instead of accessing the host's daemon, isolates the two from each other such that the host's user and group IDs no longer matter.
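
A rough scripted-pipeline sketch of that idea, assuming the Docker Pipeline plugin and the public docker:dind image (image names are placeholders and TLS handling is simplified):

    node {
        // Start a separate Docker daemon as a side container (dind needs --privileged);
        // disabling TLS keeps the sketch short.
        docker.image('docker:dind').withRun('--privileged -e DOCKER_TLS_CERTDIR=""') { dind ->
            // Run the build-tools container against that inner daemon instead of the
            // host's, so the host's user and group IDs never come into play.
            def buildImage = docker.image('build-tools:latest')   // placeholder image name
            buildImage.inside("--link ${dind.id}:docker -e DOCKER_HOST=tcp://docker:2375") {
                sh 'docker version'                // talks to the inner daemon
                sh 'docker build -t my-project .'  // example build step
            }
        }
    }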

linux-fan
0

We have been using the following solution successfully for over 6 months:

  • Run Docker in Docker: use the docker command from your container, not from the host, to avoid the dependency hell where you have to mount half of your host system.
  • Make sure that the default user inside your build container can run docker (add the jenkins user to the docker group).
  • Just mount docker.sock with permissions for everyone: -v /var/run/docker.sock:/var/run/docker.sock:rw (see the sketch below).
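
For illustration, a minimal declarative Jenkinsfile along those lines (the image name is a placeholder; the image is assumed to already contain the docker CLI and a jenkins user in the docker group):

    pipeline {
        agent {
            docker {
                image 'build-tools:latest'   // placeholder: ships its own docker CLI
                // Only the socket is shared; the docker binary comes from the image.
                args '-v /var/run/docker.sock:/var/run/docker.sock:rw'
            }
        }
        stages {
            stage('Build and publish') {
                steps {
                    sh 'docker build -t my-project .'   // talks to the host daemon via the socket
                }
            }
        }
    }
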
Chris Maes