
Context

A Ruby on Rails application built with Container Builder, destined for App Engine. We need bundler to install dependencies from a private git repository using an SSH key.

The SSH keys come from a secure bucket and pass through KMS for decryption. Those steps work fine. However, the final step, which builds the container with Docker, fails because it cannot access the SSH key.

I do not have extensive prior experience with Docker, so I assume this is a simple issue.

cloudbuild.yml

steps:
  # Get and prepare Deploy Key
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['cp', 'gs://[PROJECT-BUCKET]/git_id_rsa.enc', '/root/.ssh/git_id_rsa.enc']
  volumes:
  - name: 'ssh-setup'
    path: /root/.ssh
- name: 'gcr.io/cloud-builders/gcloud'
  args:
  - kms
  - decrypt
  - --ciphertext-file=/root/.ssh/git_id_rsa.enc
  - --plaintext-file=/root/.ssh/git_id_rsa
  - --location=global
  - --keyring=[KEYRING]
  - --key=[KEY]
  volumes:
  - name: 'ssh-setup'
    path: /root/.ssh
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: /workspace/deploy/git-prepare.sh
  volumes:
  - name: 'ssh-setup'
    path: /root/.ssh
  # ... Omitted steps ...
  # Docker build
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/[PROJECT-NAME]', '.']
  volumes:
  - name: 'ssh-setup'
    path: /root/.ssh
images: ['gcr.io/$PROJECT_ID/[PROJECT-NAME]']

Some identifiers have been omitted.

deploy/git-prepare.sh — this runs a few shell commands to repopulate the SSH directory with the necessary configuration.

#!/bin/bash

# Ensure the shared SSH directory exists.
mkdir -p /root/.ssh

# Trust bitbucket.org so git does not stall on a host-key prompt.
ssh-keyscan bitbucket.org >> /root/.ssh/known_hosts

# Point SSH at the decrypted deploy key for bitbucket.org.
touch /root/.ssh/config
echo -e "\nHost bitbucket.org\n    IdentityFile /root/.ssh/git_id_rsa" >> /root/.ssh/config

# The encrypted copy of the key is no longer needed.
if [ -f /root/.ssh/git_id_rsa.enc ]; then
    rm /root/.ssh/git_id_rsa.enc
fi

Dockerfile

# .. OMITTING BOILERPLATE FOR RAILS APP SETUP ...
# Copy the application files.
COPY . /app/

# Copy the SSH keys and config for bundler
VOLUME /root/.ssh

COPY /root/.ssh/known_hosts ~/.ssh/known_hosts
COPY /root/.ssh/config ~/.ssh/config
COPY /root/.ssh/git_id_rsa ~/.ssh/git_id_rsa

Problem:

The build task (run using a build trigger) fails with:

...omitted lines above...   
Step #5: Step 7/14 : COPY /root/.ssh/known_hosts ~/.ssh/known_hosts
Step #5: COPY failed: stat /var/lib/docker/tmp/docker-builderxxxxxxxxx/root/.ssh/known_hosts: no such file or directory

I have a feeling I don't grasp the way that Container Builder and Docker share volumes and data.

3 Answers


Dockerfile COPY can use multiple <src> resources, but the paths of those files and directories are interpreted as relative to the build context.

That is, the directory in which you run the docker build . command.
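
To make this concrete (a hypothetical layout, not the asker's actual tree): Docker resolves every COPY <src> against the context directory, even when the path looks absolute, which is why the question's error message points at /var/lib/docker/tmp/docker-builderxxxxxxxxx/root/.ssh/....

# Hypothetical build context, i.e. the directory handed to `docker build .`:
#   .
#   ├── Dockerfile
#   └── secrets/
#       └── git_id_rsa
#
# Works: secrets/git_id_rsa exists inside the context.
COPY secrets/git_id_rsa /root/.ssh/git_id_rsa
#
# Fails: the leading slash is still resolved against the context root, so
# Docker looks for <context>/root/.ssh/git_id_rsa, which does not exist.
# COPY /root/.ssh/git_id_rsa /root/.ssh/git_id_rsa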

In your case, if /root/.ssh were mounted while the Dockerfile steps run, a simple RUN cp /root/.ssh/... /destination/path would be enough.

However, you cannot mount a volume at docker build time (see moby issue 14080), so consider this instead: a multi-stage build can help.
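
A minimal sketch of that multi-stage idea (the tmp-ssh/ directory and the ruby:2.5 base image are illustrative assumptions, not taken from the question): the key lives only in a throwaway builder stage, and the final image is built from layers that never contained it.

# --- Stage 1: builder; the SSH key only ever exists in this stage ---
FROM ruby:2.5 AS builder
WORKDIR /app

# Assumes an earlier Cloud Build step copied the decrypted key, config and
# known_hosts into tmp-ssh/ inside the build context.
COPY tmp-ssh/ /root/.ssh/
RUN chmod 600 /root/.ssh/git_id_rsa

# Install gems, including those fetched over SSH from the private repo.
COPY Gemfile Gemfile.lock ./
RUN bundle install

# --- Stage 2: final image; none of its layers ever contained the key ---
FROM ruby:2.5
WORKDIR /app
# The official ruby image installs gems under /usr/local/bundle.
COPY --from=builder /usr/local/bundle /usr/local/bundle
COPY . /app/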

VonC
  • Fantastic, exactly the details I was lacking. I falsely assumed that 'volumes' in the Container Builder would flow through to the docker build task, when that is not the case. I adjusted my approach, moving the secrets into a temporary directory inside `.`, which the Dockerfile then handles as needed. Cheers! – Alex Eckermann Jan 16 '18 at 11:27

OK, I managed to do what was referenced in the answer and comments above. Here's what I did. Note that I had my id_rsa and known_hosts files in the volume /root/.ssh, as the question author posted. I assume he got to that state by following this article: https://cloud.google.com/container-builder/docs/access-private-github-repos

In my cloudbuild.yaml, after cloning my repo but before the docker build, I added this step:

- name: 'gcr.io/cloud-builders/git'
  entrypoint: 'bash'
  args:
  - '-c'
  - cp /root/.ssh/{id_rsa,known_hosts} .
  volumes:
  - name: 'ssh'
    path: /root/.ssh

Then, in the Dockerfile:

COPY id_rsa /root/.ssh/id_rsa
COPY known_hosts /root/.ssh/known_hosts
RUN eval $(ssh-agent) && \
    echo -e "StrictHostKeyChecking no" >> /etc/ssh/ssh_config && \
    ssh-add /root/.ssh/id_rsa

Note, I'm not worried about the keys living in my container, because I'm using multi-stage builds.

Jeff D
  • Nice addition to my answer. +1 – VonC Mar 20 '18 at 22:16
  • Regarding the ssh key; what I do in the Dockerfile is remove the keys before finishing the build. Because the ssh key is only needed during the `docker build` process, and not for the execution of the container, it should be removed (even if it is a read-only access key). Whether that is done by `rm`-ing the files in a script RUN from the Dockerfile or in a `cloudbuild.yaml` step is up to personal preference. – Alex Eckermann Aug 03 '18 at 04:44
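
A caveat worth adding to that comment (this is Docker layer semantics, not from the original thread): if the key entered the image through its own COPY instruction, a later rm removes it from the final filesystem, but the COPY layer still contains it, so it can be recovered from the image. A multi-stage build, as in the answer above, avoids this. On newer Docker versions, BuildKit secret mounts do too; a rough sketch, assuming the Dockerfile opts into BuildKit syntax:

# syntax=docker/dockerfile:1
# The secret is mounted only for the duration of this RUN and is never
# written to any image layer.
RUN --mount=type=secret,id=git_ssh_key,target=/root/.ssh/git_id_rsa \
    bundle install

# Built with, for example:
#   DOCKER_BUILDKIT=1 docker build --secret id=git_ssh_key,src=git_id_rsa .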

I also had to deal with the same problem. Instead of copying ssh keys into Docker images and cleaning up afterwards, I cloned the repo into the workspace in a Cloud Build step. The contents of /workspace are persisted across build steps and can be used in the Dockerfile during the build.

# Clone git repo.
- name: 'gcr.io/cloud-builders/git'
  id: git-clone
  args:
  - clone
  - git@github.com:repo.git
  - workspace/repo-clone
  volumes:
  - name: 'ssh'
    path: /root/.ssh

Then in the Dockerfile (Cloud Build runs docker build with /workspace as the context, so the clone sits inside the build context and the relative path resolves):

COPY workspace/repo-clone build/
newoxo