I have a Jenkins setup with a master and multiple worker/slave nodes. The workers are Docker containers running on VMs. The containers themselves have no Docker daemon running (nor even installed); instead, they mount /var/run/docker.sock from the host.
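
For context, the agent containers are started roughly like this (the image name is a placeholder; the socket mount is the relevant part):

docker run -d \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-registry.example.com/jenkins-agent:latest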

When I build the image, it fails to include some CA certificates that I added to the build as a binding.

My application setup (Spring Boot + Gradle) is the following:

user@nb [~/dev/project]
-> % tree ./ca-certificates
./ca-certificates
└── [drwxr-xr-x 4.0K]  binding
    ├── [-rw-r--r-- XXXK]  newRootCA.pem
    ├── [-rw-r--r-- XXXK]  newInterCA.pem
    └── [-rw-r--r--   16]  type

1 directory, 3 files
user@nb [~/dev/project]
-> % cat ./ca-certificates/binding/type 
ca-certificates
user@nb [~/dev/project]
-> % 

I mount this folder as a binding in my Gradle task (which comes from the Spring Boot Gradle plugin):

tasks.bootBuildImage {
    enabled = project.hasProperty("withDocker")
    ...
    binding("${projectDir}/ca-certificates/binding:/platform/bindings/ca-certificates")
    ...
}

On my local machine, this works as intended:

user@nb [~/dev/project]
-> % ./gradlew build -PwithDocker
...
 > Running creator
    [creator]     ===> DETECTING
    [creator]     5 of 18 buildpacks participating
    [creator]     paketo-buildpacks/ca-certificates   2.4.2
    [creator]     paketo-buildpacks/bellsoft-liberica 8.8.0
    [creator]     paketo-buildpacks/executable-jar    5.3.1
    [creator]     paketo-buildpacks/dist-zip          4.3.0
    [creator]     paketo-buildpacks/spring-boot       4.7.0
...
    [creator]     Paketo CA Certificates Buildpack 2.4.2
    [creator]       https://github.com/paketo-buildpacks/ca-certificates
    [creator]       Launch Helper: Reusing cached layer
    [creator]       CA Certificates: Contributing to layer
    [creator]         Added 2 additional CA certificate(s) to system truststore
    [creator]         Writing env.build/SSL_CERT_DIR.append
    [creator]         Writing env.build/SSL_CERT_DIR.delim
    [creator]         Writing env.build/SSL_CERT_FILE.default
...

When running the build on Jenkins, the log output about the additional CA certificate(s) is missing, and the resulting container does not contain them.

After two days of searching I found out that it's caused by the setup of the Jenkins slave. When the build runs, the Docker daemon (which runs on the host system) has no access to the project directory inside the agent container and hence cannot mount the folder with the pem files into the build container. It does not throw any errors. It does, however, create the directory /home/jenkins/workspaces/project/ca-certificates/binding on the host system (and then, I guess, mounts that empty folder into the build container).
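
A quick way to see this (assuming the docker CLI is available inside the agent container) is to bind-mount a workspace path into a throwaway container and list its contents. The daemon resolves the path on the host, where it does not exist, so it silently creates it as an empty directory:

# run inside the Jenkins agent container
docker run --rm -v "$PWD/ca-certificates/binding:/mnt" alpine ls -la /mnt
# shows an empty directory instead of the pem files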

I think this is a general issue with volumes and Docker containers in environments where the Docker daemon has no access to the filesystem of the client. I found the issue together with a colleague who is currently testing a Docker daemon installed in Minikube as an alternative to Docker Desktop on Mac and Windows.

I can only think of two solutions to this problem right now: installing a Docker daemon in all my Jenkins slaves, or building and using my own builder image that already includes the certificates.

Both solutions have their downsides. With the first one, I would need to take care of credentials for my private registry on all my slaves. The second would require regularly building new releases to pick up updates of the builder image. It would also only fix this specific case, where I need those three specific files in the container.
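
For the second option, here is a minimal sketch of baking the certificates into a custom builder image (registry and tags are placeholders; I am assuming the stock builder is Ubuntu-based, ships update-ca-certificates, and runs as the cnb user):

# Dockerfile.builder
FROM paketobuildpacks/builder:base
USER root
# update-ca-certificates only picks up *.crt files in this directory
COPY ca-certificates/binding/newRootCA.pem /usr/local/share/ca-certificates/newRootCA.crt
COPY ca-certificates/binding/newInterCA.pem /usr/local/share/ca-certificates/newInterCA.crt
RUN update-ca-certificates
USER cnb

Built and pushed with something like:

docker build -t my-registry.example.com/custom-builder:latest -f Dockerfile.builder .
docker push my-registry.example.com/custom-builder:latest

Note that this only adds trust at build time; if the certificates are also needed in the running container, the run image would need the same treatment.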

Do you have any other ideas?

Thanks in advance!

Max N.
  • I'm not sure this is exactly the same, but perhaps a possibility. For our Concourse CI we run "Docker in Docker" like you've said. Rather than set up access to private registries on the internal Docker, we have Concourse pull those images & pass them into the job as tar archives; then we `docker load` the tar archive into the nested Docker daemon & do our work (see the save/load sketch after these comments). It's a little more setup, but it also helps because it allows Concourse to potentially cache the images, which wouldn't happen in the nested Docker without saving a lot of state. – Daniel Mikusa Oct 20 '21 at 01:07
  • I think that would also solve your volume issues because the nested Docker would have access to client files. In regards to your Desktop/Minikube scenario, the trick there is to use your VM tools to mount your user directory into the VM. I'm not sure if Minikube can do this, but with Virtualbox I "share" my user directory into the VM & docker on that VM can then access my local files (just ensure the full path is the same in both places). It makes volume mounts work as you would expect. The colima tool does this as well, if you want something to manage your Docker VM for you. – Daniel Mikusa Oct 20 '21 at 01:10
  • How do you handle all the different images then? Do you have some pre-build step where you explicitly define the images to download and copy to your workers? And how do you handle pushing images to the private registry? – Max N. Oct 21 '21 at 08:12
  • In Concourse you define a resource for each image you need, Concourse watches that image for updates and will provide the latest version of the image given the constraints you define as a tar archive (I believe what it's doing is like a `docker save`) to the job. When the job runs, it's in the container with Docker support & you just `docker load` the tar archive and it's available for use. It's definitely tedious. – Daniel Mikusa Oct 21 '21 at 20:05
  • When you use the `pack` cli, it is just designed to have the Docker daemon present. You can in theory call the `lifecycle` directly. `pack` interacts with the user, sets up a container, and runs the lifecycle in the container. Since you already have a container set up, you could try using the lifecycle directly. That is what drives all of the buildpacks through their phases and exports the image. You would need to publish the image directly to a registry, since you have no Docker daemon locally. I haven't done this though, probably a lot of details to work out. – Daniel Mikusa Oct 21 '21 at 20:09
  • I have done exactly what you are referring to as `using the lifecycle directly`: https://stackoverflow.com/a/69569785/4964553 We needed it in a GitLab CI setup with Kubernetes executors/runners that have no access to a Docker (tcp) socket or Docker-in-Docker for security reasons. We even managed to log in to a private Docker registry without the docker CLI installed. We will also release a blog post about that topic next week. – jonashackt Oct 22 '21 at 11:17
  • Here’s a short summary: Logging in to your private Docker registry works [as described in this answer](https://stackoverflow.com/a/46422186/4964553): Create the directory `~/.docker` with `mkdir ~/.docker` and then create `~/.docker/config.json` with `echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_JOB_TOKEN\"}}}" >> ~/.docker/config.json` (with your Jenkins equivalent variables). Then use the lifecycle directly with `/cnb/lifecycle/creator -app=. $REGISTRY_GROUP_PROJECT:latest` (we used `paketobuildpacks/builder` as the base image for the job); a consolidated sketch follows after the comments. – jonashackt Oct 22 '21 at 11:20
  • To answer `And how do you handle pushing images to the private registry then?`: [As stated in the docs](https://github.com/buildpacks/spec/blob/main/platform.md#registry-authentication) `"If CNB_REGISTRY_AUTH is unset and a docker config.json file is present, the lifecycle SHOULD use the contents of this file to authenticate with any matching registry."`. And as the lifecycle will also publish the image automatically, your image will be pushed into your private registry. – jonashackt Oct 22 '21 at 11:30
  • I'm really interested to read your blog post :) Unfortunately I cannot try it right now. The Gradle plugin does not support overwriting the build command inside the container. I solved my specific issue by adding the certificates to `/bindings/ca-certificates` in the container (from a K8s secret) and setting `SERVICE_BINDING_ROOT=/bindings`. This way they are added to the system truststore at startup. – Max N. Oct 25 '21 at 09:55
  • @MaxN. Could you kindly share more details about how you solved it? I am facing the exact same issue: it works locally but not on GitLab. Thanks in advance. – theo Sep 08 '22 at 07:50
  • Unfortunately I still haven't fixed it yet. We no longer try to copy the certificates in; we currently just mount them at startup. We're currently migrating our Jenkins instance to a new cluster. There we'll use a dind (Docker-in-Docker) container for our builds, so we'll no longer use the host's daemon. This will fix it. Until then we need to stick with our current setup. Sorry that I cannot be of more help. – Max N. Sep 09 '22 at 08:04
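
To make the approaches from the comments concrete: Daniel Mikusa's save/load workflow would look roughly like this (image names are placeholders):

# on a machine with registry access: export the image as a tar archive
docker save paketobuildpacks/builder:base -o builder.tar
# inside the job: load the archive into the nested Docker daemon
docker load -i builder.tar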
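
And jonashackt's lifecycle approach, consolidated (to be run inside a container based on paketobuildpacks/builder; the variable names are placeholders that would map to Jenkins credentials):

# authenticate against the private registry, no docker CLI needed
mkdir -p ~/.docker
echo "{\"auths\":{\"$REGISTRY\":{\"username\":\"$REGISTRY_USER\",\"password\":\"$REGISTRY_PASSWORD\"}}}" > ~/.docker/config.json

# detect, build and export in one step; the lifecycle pushes the
# resulting image straight to the registry, no local daemon required
/cnb/lifecycle/creator -app=. "$REGISTRY/group/project:latest"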

0 Answers