
I'm trying to deploy a Docker image to AWS ECR using Bitbucket Pipelines. In the requirements.txt file, I have a Python package that comes from a private Bitbucket repo within my project.

Unfortunately, my Bitbucket Pipelines builds keep failing. I think I'm missing an essential authentication or pip install step, but I can't seem to find the correct documentation for this use case.

Following this Bitbucket community post, I generated an SSH key in my pipeline project and added it to Access Keys in the package repo. Then I followed this post and structured my files as follows:

  • Dockerfile
# syntax = docker/dockerfile:1.2
FROM python:3.9-slim
WORKDIR /src
# Install git to download private repo
RUN apt-get update && apt-get install -y git
# Add Bitbucket SSH key to install private repo
ARG SSH_PRIVATE_KEY
RUN mkdir ~/.ssh/
RUN echo "${SSH_PRIVATE_KEY}" > ~.ssh/id_rsa
RUN chmod 600  ~/.ssh/id_rsa
RUN touch ~/.ssh/known_hosts
RUN ssh-keyscan bitbucket.org >> ~/.ssh/known_hosts
RUN eval $(ssh-agent -s)
RUN ssh-add ~/.ssh/id_rsa
# Install Python dependencies
RUN pip install --upgrade pip setuptools
COPY requirements.txt requirements.txt
# requirements.txt also includes private repo package
RUN pip install --no-cache-dir -r requirements.txt
# Copy code into `src` folder
COPY src/ /src
# Set up environment variables & secrets
RUN --mount=type=secret,id=keys cat /run/secrets/keys \
  && python -m configs.parser
ENTRYPOINT ["python", "main.py"]
  • bitbucket-pipelines.yml
image: atlassian/default-image:2

pipelines:
  branches:
    master:
      - step:
          name: Build and AWS Setup
          services:
            - docker
          script:
            # Export repo variables to .env file
            - export ENV_PATH=src/configs/.env
            - echo AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID >> $ENV_PATH
            - echo AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY >> $ENV_PATH
            - export SSH_PRIVATE_KEY=`cat /opt/atlassian/pipelines/agent/data/id_rsa`
            - export TIMESTAMP="$(date +%Y%m%d%H%M%S)"
            # Build docker image with secrets mounted
            - export DOCKER_BUILDKIT=1
            - docker build --build-arg SSH_PRIVATE_KEY --secret id=keys,src=$ENV_PATH -t $AWS_ECR_REPO .
            # use pipe to push the image to AWS ECR
            - pipe: atlassian/aws-ecr-push-image:1.3.0
              variables:
                AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                AWS_DEFAULT_REGION: us-east-1
                IMAGE_NAME: $AWS_ECR_REPO
                TAGS: "latest $TIMESTAMP $BITBUCKET_BUILD_NUMBER"

My pipeline run fails at step 5 of `docker build` with the error: `executor failed running [/bin/sh -c echo "${SSH_PRIVATE_KEY}" > ~.ssh/id_rsa]: exit code: 2`

Any help with this would be greatly appreciated!

E. Pan
  • Related https://stackoverflow.com/a/66301568/11715259 and https://stackoverflow.com/q/69798493/11715259 – N1ngu Jun 08 '23 at 15:43

1 Answer


You should NOT pass private SSH keys as build arguments, nor dump them into files inside the Docker image you are building. Either way, the private key ends up exposed in your image layers.

Instead, you should mount the build agent's SSH key with BuildKit's SSH forwarding:

# In the Dockerfile:
RUN --mount=type=ssh \
  pip install -r requirements.txt

# In the build script:
docker build --ssh default=$BITBUCKET_SSH_KEY_FILE .

(See https://support.atlassian.com/bitbucket-cloud/docs/run-docker-commands-in-bitbucket-pipelines/#Docker-BuildKit-restrictions)
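Put together, a minimal Dockerfile for this approach might look like the following sketch (the base image and requirements.txt layout are taken from the question; openssh-client is an assumption needed for git-over-SSH, and host key verification still needs the known_hosts handling discussed next):

```dockerfile
# syntax=docker/dockerfile:1
FROM python:3.9-slim
WORKDIR /src
# git and an ssh client are needed to pip-install from a private repo over SSH
RUN apt-get update && apt-get install -y --no-install-recommends git openssh-client
COPY requirements.txt requirements.txt
# The agent's ssh-agent socket is forwarded only for this RUN step;
# no key material is written to any image layer.
RUN --mount=type=ssh pip install --no-cache-dir -r requirements.txt
```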

As for the known_hosts content: to protect the image build from being spoofed, I'd recommend NOT accepting whatever key ssh-keyscan happens to return at build time.

Either vendor the necessary SSH server fingerprints in the Docker build context (and keep them under version control), or mount (not necessarily as a secret) an externally managed known_hosts file. For example, the one managed by Bitbucket Pipelines already features a maintained bitbucket.org SSH fingerprint: https://support.atlassian.com/bitbucket-cloud/docs/set-up-pipelines-ssh-keys-on-linux/#Update-the-known-hosts
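For the vendoring option, a sketch (assuming a known_hosts file, generated once with ssh-keyscan and verified against Bitbucket's published fingerprints, is committed next to the Dockerfile):

```dockerfile
# known_hosts was generated once, verified out-of-band, and committed:
#   ssh-keyscan bitbucket.org > known_hosts
COPY known_hosts /root/.ssh/known_hosts
RUN --mount=type=ssh pip install --no-cache-dir -r requirements.txt
```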

# In the Dockerfile:
RUN \
  --mount=type=ssh \
  --mount=type=bind,target=~/.ssh/known_hosts,source=known_hosts \
  pip install -r requirements.txt

# In the build script, before docker build:
cp ~/.ssh/known_hosts .
docker build --ssh default=$BITBUCKET_SSH_KEY_FILE .
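On the Bitbucket Pipelines side, the build step from the question would then reduce to something like this sketch ($AWS_ECR_REPO is the variable from the original yml; $BITBUCKET_SSH_KEY_FILE is the key file Pipelines provides when repository SSH keys are configured):

```yaml
- step:
    name: Build
    services:
      - docker
    script:
      - export DOCKER_BUILDKIT=1
      # reuse the agent-managed known_hosts as part of the build context
      - cp ~/.ssh/known_hosts .
      - docker build --ssh default=$BITBUCKET_SSH_KEY_FILE -t $AWS_ECR_REPO .
```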
N1ngu
  • Thank you for the detailed explanation! I've updated my Dockerfile and bitbucket-pipelines.yml accordingly. I've also changed my `requirements.txt` file to this: ` @ git+ssh://git@bitbucket.org/.git` I'm still getting an error within the docker build command: `Host key verification failed. Could not read from remote repository. Please make sure you have the correct access rights and the repository exists.` Is it due to the known_hosts content? Do I need to add something to the private package's repo settings? – E. Pan Jun 16 '23 at 04:22
  • I've already added the public SSH key from the building repo to the package repo under Repository Settings > Access Keys – E. Pan Jun 16 '23 at 04:30
  • That is due to `known_hosts`. Did you use the `--mount=type=bind,target=~/.ssh/known_hosts,source=known_hosts` in the Dockerfile and the `cp ~/.ssh/known_hosts .` in the building script? I thought that should be enough. Maybe make sure the file is not excluded from the building context by a `.dockerignore` rule. – N1ngu Jun 16 '23 at 08:20
  • Oh this was a super useful tip! Made the mistake of not updating my Dockerfile command to `ssh-keyscan bitbucket.org`, left it as github.com initially. It works perfectly now, thank you!! – E. Pan Jun 19 '23 at 04:15