143

Is there an elegant way to add SSL certificates to images that come from docker pull?

I'm looking for a simple and reproducible way of adding a file to /etc/ssl/certs and running update-ca-certificates. (This should cover Ubuntu and Debian images.)

I'm using docker on CoreOS, and the CoreOS machine trusts the needed SSL certificates, but the docker containers obviously only have the default.

I've tried using docker run --entrypoint=/bin/bash to then add the cert and run update-ca-certificates, but this seems to permanently override the entrypoint.

I'm also wondering now: would it be more elegant to just mount /etc/ssl/certs on the container from the host machine's copy? Doing this would implicitly allow the containers to trust the same things as the host.
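Something like this, perhaps (untested; it assumes the host's and the image's certificate layouts are compatible):

docker run -v /etc/ssl/certs:/etc/ssl/certs:ro -d some-image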

I'm at work behind an annoying proxy that re-signs everything :(, which breaks SSL and makes containers kind of strange to work with.

Beau Trepp
  • 3
    Have you thought about creating a Dockerfile that would use your image, add the file and run update-ca-certificates? or is that not what you are looking for? – Céline Aussourd Sep 26 '14 at 14:04
  • 2
    I have done that for some images. It's not a bad solution. Does require you to build on all images with your own though. – Beau Trepp Oct 06 '14 at 00:07

7 Answers

110

Mount the certs into the Docker container using -v:

docker run -v /host/path/to/certs:/container/path/to/certs -d IMAGE_ID "update-ca-certificates"

Note: the -v flag bind-mounts a host file or directory into the container.
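As the comments note, the trailing "update-ca-certificates" is passed as the container's command, replacing the image's CMD. If the host's certificate layout is already compatible with the image's (e.g. both Debian-style), a read-only mount alone is enough and leaves the ENTRYPOINT/CMD untouched. A sketch, with assumed paths:

docker run -v /etc/ssl/certs:/etc/ssl/certs:ro -d IMAGE_ID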

cdrev
  • 7
    That's pretty nifty. If the container uses the same style of ssl_certs you wouldn't even need the update-ca-certificates line, the host would have already done it :). – Beau Trepp Oct 06 '14 at 00:09
  • 10
    and if we are building in the cloud? – Ewoks Jun 10 '19 at 16:30
  • 3
    How does this play nicely with the container image's `CMD` or `ENTRYPOINT`? Isn't the "update-ca-certificates" either interpreted as an additional argument or replacing the actual command defined in the Dockerfile? – Joerg Jan 26 '22 at 08:23
  • 2
    what if '/host/path/to/certs' is a symlink? and what is the '/container/path/to/certs' if WORKDIR is '/usr/src/app' ? – user2401543 Jun 07 '22 at 16:25
  • What is the certs path for the container? Does it depend on the underlying OS of the base image? Also, as @Joerg mentioned, does `update-ca-certificates` not override the `CMD` from the Dockerfile? – dragonfly02 Nov 22 '22 at 08:59
  • This "accepted" answer doesn't actually work for the question asked, unless your container entrypoint is bash and you don't need to pass any commands to the container (a minority of cases). It's mounting the extra root CA folder from the host to guest, then running the CA update command, but that state isn't saved for the next run so it has to be called every time. – mtalexan Jan 09 '23 at 14:24
37

I am trying to do something similar to this. As commented above, I think you would want to build a new image with a custom Dockerfile (using the image you pulled as a base image), ADD your certificate, then RUN update-ca-certificates. This way you will have a consistent state each time you start a container from this new image.

# Dockerfile
FROM some-base-image:0.1
# On Debian/Ubuntu, update-ca-certificates picks up .crt files placed
# under /usr/local/share/ca-certificates
ADD your_certificate.crt /usr/local/share/ca-certificates/your_certificate.crt
RUN update-ca-certificates

Let's say a docker build against that Dockerfile produced IMAGE_ID. On the next docker run -d [any other options] IMAGE_ID, the container started by that command will have your certificate info. Simple and reproducible.
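For example, assuming the Dockerfile above sits in the current directory (the tag name my-base-with-certs is arbitrary):

docker build -t my-base-with-certs .
docker run -d my-base-with-certs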

shudgston
  • Usually I would prefer the docker run -v solution mentioned in other answers. But your solution also works if you need certificates during docker build. Thanks! – bastian Aug 05 '16 at 07:09
  • 22
    I would be wary of putting certificates into any public container. Someone else could pull your container and extract your private certs. – skibum55 Sep 01 '16 at 18:25
  • 7
    While that is a very good point, the solution above does not make anything public. This is meant to add your own certificates into an image that is built locally and then used privately. You _could_ then push the resulting image to a public repository, but that would be a bad idea as you said. – shudgston Sep 26 '16 at 13:58
  • Of course, that image could also be pushed to a private registry too, which is not as bad. – Seer Mar 14 '17 at 12:17
  • 1
    At best this is _okay_. If you want better than okay this sort of configuration needs to be passed in at runtime. In addition to implications about how you will need to store and safeguard the image the same way you would a secret, consider that the certs will eventually expire. If the certs are baked into the image you're effectively setting an expiration of the image which few will be aware of. This is bad. Speaking from experience. – ztech Apr 19 '18 at 15:48
  • 12
    Since when certificates are secret? – techraf Jul 21 '18 at 17:26
  • 9
    Since your server needs a private key to match the certificate it is publishing. – John Rix Apr 03 '19 at 23:38
  • You want to use COPY, not ADD. – jonathan Jun 18 '20 at 14:32
  • @techraf Consider the case of a containerized web app that will connect to RabbitMQ. We want to have server certs on RabbitMQ and client certs in the web app, which it uses to prove "I'm allowed to connect to RabbitMQ", like a username / password but stronger. You may not want to build the client cert into the web app image, as it's meant to be secret. – Nathan Long Oct 13 '20 at 18:20
  • 2
    I wonder how many people got their private cert leaked because of your comment. It's a terrible solution in every way. The least you could do is add a disclaimer to your answer that warns people against making their docker image public if they use your "solution". – MyUsername112358 Feb 11 '22 at 16:00
  • 2
    @MyUsername112358 this is talking about public keys, not private keys. The cert can't do anything dangerous without the private key. – Minecraftchest1 Dec 28 '22 at 02:55
  • 1
    The solution proposed is basically "fork the container image and add your (public) cert to it". That is certainly a solution, but a maintainability nightmare, especially if you're running something that needs to pull `latest` or a similar moving image tag. – mtalexan Jan 09 '23 at 14:27
34

As was suggested in a comment above, if the certificate store on the host is compatible with the guest, you can just mount it directly.

On a Debian host (and container), I've successfully done:

docker run -v /etc/ssl/certs:/etc/ssl/certs:ro ...
Jonathon Reinhart
  • 1
    So what to do when building Docker image on the build server? :/ – Ewoks Jun 10 '19 at 16:29
  • @Ewoks You could host your certs on some private DNS and load them inside your helm charts and you can automate creating the volume on your cluster. – Bassam Gamal Sep 12 '19 at 10:48
  • Based on the question asked, they aren't compatible. Ubuntu/Debian uses legacy certificate layout, while Fedora CoreOS (like Arch) uses a modern certificate layout that includes additional files. – mtalexan Jan 09 '23 at 14:18
7

You can use a relative path to mount the volume into the container:

docker run -v `pwd`/certs:/container/path/to/certs ...

Note the backticks around pwd, which give you the present working directory. This assumes the certs folder is in the directory where docker run is executed. It's handy for local development and keeps the certs folder visible within your project.
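If you prefer, the $(pwd) form is equivalent and quotes more cleanly:

docker run -v "$(pwd)/certs:/container/path/to/certs" ...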

alltej
  • How do I know which local certs I need to mount to the container? I have so many certs locally. Also I am on macOS and the container is Linux; how do I make the container run `update-ca-certificates` without overriding the `CMD` in the Dockerfile? – dragonfly02 Nov 22 '22 at 06:43
  • The base problem behind the question is that they don't have the certs to mount in the first place. – mtalexan Jan 09 '23 at 14:19
4

I've written a script that wraps docker and sets up the host's SSL certificates in the guest.

The bonus is that you don't need to rebuild any containers - it should Just Work.

It's called docker, so you can either copy it somewhere on your $PATH that is searched before the real docker, or rename it and put it elsewhere.
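For reference, the rough shape of such a wrapper looks something like this (an illustrative sketch only, not the actual script; it assumes the host and guest trust stores are compatible):

#!/bin/sh
# Hypothetical "docker" wrapper: when the subcommand is "run", inject a
# read-only bind mount of the host's trust store, then delegate to the
# real docker binary.
if [ "$1" = "run" ]; then
  shift
  exec /usr/bin/docker run -v /etc/ssl/certs:/etc/ssl/certs:ro "$@"
else
  exec /usr/bin/docker "$@"
fi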

Do let me know via GitHub if you have any issues with it!

Ari Fordsham
  • Your script is just a wrapper around the already accepted answer. On StackOverflow you should provide a description of what you're doing in your answer and then can link to the source that has the implementation, and should avoid duplication of answers. – mtalexan Nov 30 '22 at 19:07
  • As far as the user is concerned, this is a significantly easier way to solve the problem - just drop in the script. I don't think SO answers are required to describe the implementation method of tools used in answers. – Ari Fordsham Dec 01 '22 at 11:59
  • Answers on StackOverflow are expected to explain how/why this solves the problem. Also your script doesn't solve the problem. With an Ubuntu/Debian host, the host system's root CA trust store is not compatible with the target Fedora CoreOS container image's trust store. – mtalexan Jan 09 '23 at 14:21
  • Do you have a source for that? – Ari Fordsham Jan 10 '23 at 15:46
2

This won't directly answer your question but this is how I solved the same issue.

I was running golang:1.16.4-buster and nothing I tried with certificates worked. I switched to golang:1.17.8-alpine3.15 and it worked from the start, without my having to load any certificates. Plus, there's the bonus of a smaller image.
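In other words, the fix was just swapping the base image in the Dockerfile (a sketch):

# Was: FROM golang:1.16.4-buster
FROM golang:1.17.8-alpine3.15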

Eli Fry
  • Almost certainly the specific image was built to pull in environment variables when you `docker run` it and mount a CA-related variable as a path for additional CAs. That implies the original container image was created with extra certificates in mind, and you incidentally had one of them set properly. – mtalexan Nov 30 '22 at 19:09
2

There isn't really a great way to solve this when you're talking about CoreOS (Fedora) and an Ubuntu/Debian guest. Fedora uses the modern standard for organizing the "trust-anchors", while Ubuntu/Debian still uses the older style. The two aren't directly compatible.

Having spent an excessively long time trying to solve the reverse of this problem (Fedora on Ubuntu), your options are:

  1. Get the container image to add first-class support for custom certificates supplied via an environment variable (common in well-crafted containers, but not going to happen for a stock Ubuntu distro image).
  2. Find a way to run a similar host system (usually not a viable option) and mount the host trust-anchors over the guest ones.
  3. Spin your own version of the image that adds the certs, or adds support for specifying them (a long-running fork is usually not maintainable).
  4. Wrap the ENTRYPOINT with a script that adds and runs the CA addition/installation from an optional extra host-mount (very problematic, see below).
  5. Run the container once with modified arguments to generate a copy of the updated trust store in a host-mount, then mount that over the trust store on subsequent runs of the container (do this one).

The very best option is usually to get the container image maintainer to add support (or to submit a PR yourself) for loading extra CA certificates from an environment variable, since this is a very common use case among corporate users and self-hosters. However, this usually adds overhead that's unacceptable for one-shot containers, and the image maintainer may have other good reasons not to do it. It also doesn't solve the problem for you in the meantime.

Changing your host, and "forking" the image to spin your own, also aren't great options; they're usually non-starters for deployment or maintainability reasons.

Wrapping the ENTRYPOINT is basically an ad-hoc version of modifying the container to support custom certificates, done purely from outside the image. It has all the same potential downsides, plus the drawback that you're working from outside the container, but with the benefit that you don't need to wait on an image update. I would not usually recommend this option. The approach is to write a script, host-mounted into the container, that does the CA setup and then runs whatever the ENTRYPOINT and CMD are (see the sketch below).

There are some major gotchas here, though. First, you need to customize the script to the specific container you're running so it executes the same entrypoint. With some scripting this can probably be determined automatically, but you need to watch out for well-crafted containers that ship an init system to handle the pid 1 problem (https://github.com/Yelp/dumb-init#why-you-need-an-init-system tl;dr: catching signals like interrupts, and not losing system resources when force-stopping a container, requires a pid 1 init process to manage it). There are a handful of different init systems out there, and you can't wrap an init system.

Additionally, if you're using Docker, you can't override entrypoints with multiple commands from the command line. Containers with init systems like dumb-init take the command actually being run as an argument, so the entrypoint is a list (['/usr/bin/dumb-init', '/usr/bin/my-command']). Docker only allows multi-command entrypoints to be specified via the API, not via the command line, so there's no way to keep the dumb-init command and supply your own script as the second argument.
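For a simple image without an init system, the wrapper might look roughly like this (a sketch; it assumes a Debian/Ubuntu guest, that the certs and script are host-mounted, and that you've looked up the image's real entrypoint, represented here by the placeholder /original-entrypoint):

#!/bin/sh
# wrap-entrypoint.sh - hypothetical wrapper, host-mounted into the container.
# Install any host-provided CA certs, then hand off to the real entrypoint.
cp /extra-certs/*.crt /usr/local/share/ca-certificates/ 2>/dev/null || true
update-ca-certificates
exec /original-entrypoint "$@"

Invoked along the lines of:

docker run \
  -v "$PWD/extra-certs:/extra-certs:ro" \
  -v "$PWD/wrap-entrypoint.sh:/wrap-entrypoint.sh:ro" \
  --entrypoint /wrap-entrypoint.sh \
  IMAGE_ID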

The "Best" Solution: While long running containers would strongly benefit from option #1 above, your best bet for one-shot containers and for an immediate solution is to generate a host-mount of the guest trust-anchors.
The best way is to generate a host-stored copy of what the updated container trust-anchors should look like, and mount that over the top of your container trust-store. The most compatible way is to do this using the target container image itself, but with an override for the entrypoint, host-mounting a "cache" folder for the trust-anchors in the project workspace associated with the container. However that might not work in cloud and CI situations. An alternative option is to keep a separate container volume around that uses each of the two major trust-anchor styles (modern, e.g. Fedora, Arch, etc, and legacy, e.g. Debian, Ubuntu, etc) and is separately updated semi-regularly from a generic container image of the appropriate type. The resulting container volumes then merely becomes a volume dependency where the proper one is selected based on the target container image type. The gist of how to generate one of these is to host-mount a script that adds the root CAs to the appropriate folder (FYI, legacy trust-anchors will search the root CA folders recursively, but modern will not), runs the trust-anchor update command, and then copies the resulting trust-anchor folders to a host-mount.
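A sketch of that flow for a legacy-style (Debian/Ubuntu) guest; the image name and paths are assumptions:

# Generate the updated trust store once (or whenever the certs change):
docker run --rm \
  -v "$PWD/extra-certs:/usr/local/share/ca-certificates:ro" \
  -v "$PWD/certs-cache:/out" \
  --entrypoint /bin/sh \
  IMAGE_ID -c 'update-ca-certificates && cp -rL /etc/ssl/certs/. /out/'
# Mount the cached store over the guest's on every subsequent run:
docker run -v "$PWD/certs-cache:/etc/ssl/certs:ro" IMAGE_ID ...

(cp -rL dereferences the symlinks that update-ca-certificates creates, so the cached copy is self-contained.)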


Update:

If it's still relevant: most Ubuntu container base images now use cloud-init internally, which has support for a lot of common things, including adding custom root CAs to the container image; i.e., they already support option 1 above.
https://cloudinit.readthedocs.io/en/latest/topics/examples.html#configure-an-instances-trusted-ca-certificates

I believe you can mount a file into /etc/cloud/cloud.cfg.d/ containing YAML like in the linked example, and it will get picked up during container boot. You could easily generate that YAML programmatically based on the extra root CA certificates you want.
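A hedged sketch (the ca_certs keys follow the cloud-init docs linked above; verify against your cloud-init version, as older releases spelled the module ca-certs):

cat > 90-extra-ca.cfg <<'EOF'
#cloud-config
ca_certs:
  trusted:
    - |
      -----BEGIN CERTIFICATE-----
      ...your CA here...
      -----END CERTIFICATE-----
EOF
docker run -v "$PWD/90-extra-ca.cfg:/etc/cloud/cloud.cfg.d/90-extra-ca.cfg:ro" IMAGE_ID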


EDIT1: Fixed: I had reversed host and guest from the original question. Also added the update about cloud-init. EDIT2: Fixed style typo

mtalexan