
I'm trying to build an image in Docker that requires a few secret files to do things like pulling from a private git repo. I've seen a lot of people with code like this:

# copy the private key into the image (it ends up stored in the image layers)
ADD id_rsa /root/.ssh/id_rsa
RUN chmod 700 /root/.ssh/id_rsa
# pre-populate known_hosts so the clone doesn't prompt for host verification
RUN touch /root/.ssh/known_hosts
RUN ssh-keyscan github.com >> /root/.ssh/known_hosts
# clone the private repo over SSH using the baked-in key
RUN git clone git@github.com:some/repo.git /usr/local/some_folder

Although that works, it means I have to store my private id_rsa with my image, which strikes me as a bad idea. What I'd much rather do is keep my secret files in some cloud storage like s3, and just pass in credentials as environment variables to be able to pull everything else down.

I know that I can pass environment variables in at docker run with the -e switch, but if I need some files at build time (like the id_rsa to perform a git clone), what can I do? Ideally I'd be able to pass environment variables to docker build, but that's not possible (I can't understand why).
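
At run time, for example, I can already do something like this (the variable names and image name here are made up):

docker run -e AWS_ACCESS_KEY_ID=... -e AWS_SECRET_ACCESS_KEY=... my_image

and have an entrypoint script inside the container pull the secret files down from S3. But there's nothing equivalent for the build step.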

So, ideas? What's the canonical/correct thing to do here? I can't be the first person with this issue.

Eli
  • possible duplicate of [Docker and securing passwords](http://stackoverflow.com/questions/22651647/docker-and-securing-passwords) – Mark O'Connor Sep 06 '14 at 09:02
  • Why do you need to clone the repos while building the container? Can't you just have those files already present locally? That way you get rid of the problem with secret files and don't need to install git on the image – Abel Muiño Sep 06 '14 at 16:41
  • @AbelMuiño the repo is going to be updated independently of Docker, so on any new image build, we always want the newest version of the repo. Cloning a repo locally and having those files be separately part of the image will cause more work having to update the repo constantly, and defeats the purpose of using Git in the first place. – Eli Sep 06 '14 at 21:16
  • See https://stackoverflow.com/a/51921954/6309 and **`docker build --secret id=mysecret,src=/secret/file`** – VonC Aug 19 '18 at 21:26
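
A rough sketch of how that --secret flag is used (it needs a reasonably recent Docker with BuildKit enabled; the secret id and file paths below are illustrative):

# syntax=docker/dockerfile:1
FROM alpine
# the secret is mounted only for this RUN step and is never stored in an image layer
RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret

built with:

DOCKER_BUILDKIT=1 docker build --secret id=mysecret,src=/secret/file .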

1 Answer


I'll start with the easiest part, which I think is a common misconception:

Ideally I'd be able to pass environment variables to docker build, but that's not possible (I can't understand why).

A docker build is meant to be reproducible: given the same context (the files in the same directory as the Dockerfile), the resulting image is the same. Builds are also meant to be simple. Both things together explain the absence of environment options or other conditionals.

Now, because the build needs to be reproducible, the execution of each command is cached. If you run the build twice, the git clone will only run the first time.

By your comment, this is not what you intend:

so on any new image build, we always want the newest version of the repo

To trigger a new build you need to either change the context or the Dockerfile.
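
As an aside, if you just want every step to re-run for a one-off build, docker build also has a --no-cache flag (the image name here is a placeholder):

docker build --no-cache -t my_image .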

The canonical way (I'm probably abusing the word, but this is how automated builds work) is to include the Dockerfile in git.

This allows a simple workflow of git pull ; docker build ... and avoids the problem of storing your git credentials inside the image.
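
Roughly (the repository URL is the one from your question, the image name is a placeholder):

# on the host, outside Docker; the Dockerfile lives inside the repo
git clone git@github.com:some/repo.git some_folder   # only needed the first time
cd some_folder
git pull                     # fetch the newest version of the repo
docker build -t my_image .   # the freshly pulled files are the build context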

Abel Muiño