8

We have a Google Artifact Registry repository for our Python packages. Authentication is handled with keyring plus the keyrings.google-artifactregistry-auth backend, and locally this works well.

However, how do I pass credentials to docker build when building an image that needs to install a package from our private registry?

I'd like to keep the Dockerfile the same when building with a user account or with a service account.

This works, but I'm not sure it's best practice:

# syntax=docker/dockerfile:1
FROM python:3.9

RUN pip install keyring keyrings.google-artifactregistry-auth

COPY requirements.txt .

RUN --mount=type=secret,id=creds,target=/root/.config/gcloud/application_default_credentials.json \
    pip install -r requirements.txt

Then build with (the --secret flag requires BuildKit, which is the default builder in recent Docker versions; on older versions set DOCKER_BUILDKIT=1):

docker build --secret="id=creds,src=$HOME/.config/gcloud/application_default_credentials.json" .
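This setup should also keep the Dockerfile identical for a service account: the keyrings.google-artifactregistry-auth backend reads Application Default Credentials, and a service-account key file is a valid ADC file, so you can point the same secret id at it. A sketch, where /path/to/sa-key.json is a hypothetical key path:

```shell
# Local user account: ADC comes from gcloud
gcloud auth application-default login
docker build \
  --secret="id=creds,src=$HOME/.config/gcloud/application_default_credentials.json" .

# Service account (e.g. in CI): pass the key file as the same secret id,
# so the Dockerfile does not change at all
docker build --secret="id=creds,src=/path/to/sa-key.json" .
```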
Peter

2 Answers

7

Using keyring is great when working locally, but in my opinion it's not the best solution for a Dockerfile. This is because your only options are to mount volumes at build time (which I feel is messy) or to copy your credentials into the image (which I feel is insecure).

Instead, I got this working by doing the following in Dockerfile:

FROM python:3.10

ARG AUTHED_ARTIFACT_REG_URL
COPY ./requirements.txt /requirements.txt

RUN pip install --extra-index-url ${AUTHED_ARTIFACT_REG_URL} -r /requirements.txt

Then, to build your Dockerfile you can run:

docker build --build-arg AUTHED_ARTIFACT_REG_URL=https://oauth2accesstoken:$(gcloud auth print-access-token)@url-for-artifact-registry .

Although it doesn't seem to be in the official docs for Artifact Registry, this works as an alternative to using keyring. Note that the token generated by gcloud auth print-access-token is valid for 1 hour.
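One caveat: a build arg that is consumed by a RUN instruction is recorded in the layer's creating command, so the expanded value (token included) can be read back from the image metadata. A quick way to check, assuming a hypothetical image tag my-image:

```shell
# Each layer's creating command, including expanded ARG values, is shown here
docker history --no-trunc my-image
```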

LondonAppDev
  • I like this approach, thanks! – Peter May 20 '22 at 08:22
  • this answer makes sense, but I'm concerned about the credentials being stored in the built image – logoff Oct 17 '22 at 12:00
  • @logoff me too, that's why I used build args which do not persist in the container (as per docs: https://docs.docker.com/engine/reference/commandline/build/#set-build-time-variables---build-arg). Did I miss something? Is it cached by `pip` somewhere? – LondonAppDev Oct 18 '22 at 12:48
  • I'm not sure about the internals of `pip`, so I have the same doubt – logoff Oct 19 '22 at 07:14
  • @LondonAppDev The `docker build ...` command is logged in the container image and considered bad practice. – Robino Feb 27 '23 at 18:16
  • @Robino yes, but what's your proposed solution to the problem? Is it better to bake the arg value into the image instead? This was the best solution I could come up with, but if there were any better ones inline with best practice I'd love to hear. – LondonAppDev Feb 28 '23 at 07:54
  • @LondonAppDev I feel you that mounting a temporary secret to a something like `/run/secrets/keyring` feels messy in a traditional computing sense, but mounting volumes over directories (be it for caching builds artefacts to improve build times, or temporary secrets removing layers for improved security) is really an asset of the containerised world. IMO – Robino Feb 28 '23 at 22:04
0

The most secure option, and current best practice, for accessing private resources during docker build is using Docker build secrets, as you already have in the OP.

You probably won't need the keyring again after this step, so you could consider uninstalling it, and follow up with a multi-stage build for further security and a smaller image.

There are exceptions, for example if this is just a base image and application images will reuse the keyring with another RUN --mount command. But even in that case, a final multi-stage build can help for the same reasons.

The result will be a clean Python Docker image with just the required packages installed, in a single layer, and with nothing that isn't needed in production, such as the keyring packages.
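A minimal sketch of that combination (secret mount plus multi-stage copy; the stage layout and pip --prefix location are my assumptions, not something from the OP):

```dockerfile
# syntax=docker/dockerfile:1
FROM python:3.10 AS builder
RUN pip install keyring keyrings.google-artifactregistry-auth
COPY requirements.txt .
# Credentials exist only during this build step, never in an image layer
RUN --mount=type=secret,id=creds,target=/root/.config/gcloud/application_default_credentials.json \
    pip install --prefix=/install -r requirements.txt

FROM python:3.10-slim
# Copy only the installed requirements; keyring and its backend stay behind
COPY --from=builder /install /usr/local
```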

Robino