7

So, I'm trying not to put sensitive information in the Dockerfile. A logical approach is to put the creds in the EB configuration (the GUI) as an ENV variable. However, docker build doesn't seem to be able to access that ENV variable. Any thoughts?


FROM jupyter/scipy-notebook

USER root

ARG AWS_ACCESS_KEY_ID
RUN echo ${AWS_ACCESS_KEY_ID}
William Falcon
  • 1
    These environment variables are supposed to be used only at runtime. You can create a simple shell script to run when your Docker container is created and access those variables there – sap1ens Feb 26 '17 at 01:47
  • Did you ever figure out a way to achieve this? – danbrellis Aug 16 '23 at 19:23
  • @danbrellis I have rewritten my [6 years old answer](https://stackoverflow.com/a/42463668/6309), with another proposal to manage runtime secrets. – VonC Aug 16 '23 at 19:42

2 Answers

1

I assume that for every deployment you create a new Dockerrun.aws.json file with the correct Docker image tag for that deployment. At the deployment stage, you can inject environment values, which are then used in the docker run command by the EB agent. Your Docker containers can then access these environment variables (a sketch follows below).
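For illustration, a minimal sketch of the multi-container (version 2) Dockerrun.aws.json format, assuming a single container named app and a placeholder ECR image; the variable name, ports, and value are made up here and would normally be filled in by your deployment script rather than committed:

```json
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "app",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:deploy-42",
      "essential": true,
      "memory": 512,
      "portMappings": [
        { "hostPort": 80, "containerPort": 8888 }
      ],
      "environment": [
        { "name": "AWS_ACCESS_KEY_ID", "value": "filled-in-at-deploy-time" }
      ]
    }
  ]
}
```

For single-container Docker environments, the equivalent is to set environment properties on the environment itself (console, EB CLI, or option settings); either way, the values reach the container at docker run time, not at build time.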

Cagatay Gurturk
  • Can you provide an example of such a file with variables defined? And how would that differ from hardcoding ENV values into the Dockerfile? – Bostone Oct 16 '17 at 03:27
  • 1
    In the link above you can see how to inject environment variables in the JSON file. If you embed the environment variables into your Docker image, then your image will be tightly coupled to your environment. If you inject them via Dockerrun.aws.json, then your image can be used in any environment. Also keep in mind that it's not a good idea to inject sensitive information via the Dockerfile, given that it's very easy to inspect the image and see the values. So anybody who can pull your image can also see your credentials, for example. – Cagatay Gurturk Oct 17 '17 at 09:34
  • 1
    I am looking for exactly this, and I really don't want to add secrets to the Dockerfile. However, I'm missing how to inject environment values at the deployment stage. Can you please help me with this? – Anand Oct 15 '21 at 16:19
-1

Using secrets or sensitive information in Docker encompasses two potential timeframes:

  1. Build time: When the Docker image is being constructed.
  2. Runtime: When a container from the Docker image is executing.

For runtime secrets in AWS Elastic Beanstalk:

Elastic Beanstalk environment variables can be used to pass runtime secrets to the application. These variables can be set via the AWS Management Console, EB CLI, or AWS SDKs and are injected into the Docker container at runtime. Your application can then read these environment variables.
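As a small, hedged example with the EB CLI (the property name DB_PASSWORD is made up here):

```sh
# Set an environment property on the running EB environment,
# outside the image and outside version control
eb setenv DB_PASSWORD=example-value

# The application inside the container then reads it as an ordinary
# environment variable, e.g. $DB_PASSWORD in a shell script
```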

However, for build time secrets:

  • Docker has a native mechanism to pass build-time values using the --build-arg parameter of the docker build command. That approach uses the ARG instruction in the Dockerfile (see the sketch after this list).

  • Yet, Elastic Beanstalk's Docker integration, as of the last update, does not allow for the passing of build arguments during the image build process. That means you cannot use the ARG mechanism for build-time secrets when deploying with Elastic Beanstalk.
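A minimal sketch of that native mechanism outside Elastic Beanstalk, reusing the ARG declared in the question's Dockerfile (the value and tag are placeholders):

```sh
# Supply the build argument at build time; EB does not do this for you
docker build --build-arg AWS_ACCESS_KEY_ID=xxx -t my-notebook .
```

Keep in mind that values consumed this way can still surface in the image metadata and in `docker history`, so even where build args are available they are not a safe channel for long-lived credentials.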

As such, if your use case specifically requires build-time secrets with Elastic Beanstalk, you might need to consider alternative strategies:

  1. Pre-built Images: Build your Docker images in a secure environment where you can use build-time secrets. Push these pre-built images to a private registry (like Amazon ECR) and then reference these images in Elastic Beanstalk.
    Remember: Storing or baking secrets into Docker images is a security risk. Instead, prefer methods that inject secrets at runtime or grant permissions without explicit credentials.
  2. IAM roles: For AWS specific secrets (like access to S3 or DynamoDB), instead of passing secrets, attach an IAM role to your Elastic Beanstalk environment's EC2 instances that grants the necessary permissions. AWS SDKs inside your container will automatically use these permissions without needing explicit credentials.
    This approach is used, for instance, to access AWS Secrets Manager (a sketch follows below).
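A minimal sketch of that pattern using the AWS CLI from inside the instance or container, assuming the environment's instance profile already allows secretsmanager:GetSecretValue on the secret; the secret name my-app/credentials is hypothetical:

```sh
# No access keys anywhere: credentials come from the instance profile
# attached to the Elastic Beanstalk environment's EC2 instances
aws secretsmanager get-secret-value \
    --secret-id my-app/credentials \
    --query SecretString \
    --output text
```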
VonC
  • So, in the context of Elastic Beanstalk, how do you call docker build? – William Falcon Feb 26 '17 at 01:29
  • @WilliamFalcon `docker build --build-arg AWS_ACCESS_KEY_ID=xxx` – VonC Feb 26 '17 at 01:29
  • 4
    But where is this command called from? .ebextensions? A random file? The Dockerfile? The EB config GUI? (Also, if it's in .ebextensions or the Dockerfile, it defeats the purpose of not committing keys or credentials.) – William Falcon Feb 26 '17 at 01:30
  • @WilliamFalcon you could build your image locally and upload it to ECR, as described in https://aws.amazon.com/blogs/compute/running-swift-web-applications-with-amazon-ecs/ – VonC Feb 26 '17 at 01:35
  • @WilliamFalcon another example: https://github.com/awslabs/ecs-deep-learning-workshop – VonC Feb 26 '17 at 01:37
  • 3
    Sadly, you can't feed ARGs into an EB-based Dockerfile as of right now. What they should do is pass all defined environment variables to `docker build` automatically, but they don't. Or so it seems – Bostone Oct 16 '17 at 03:26