I am running a Docker container on Amazon EC2. Currently I have added the AWS credentials to the Dockerfile. Could you please let me know the best way to do this?
- How about if I am running a Docker container on my laptop which is supposed to also magically work in ECS when I push it there? I am gonna guess I use the --volume flag... someone must have already answered... – Randy L Jul 29 '18 at 04:43
- Related: https://stackoverflow.com/questions/61367284/ecs-fargate-task-container-missing-aws-container-credentials-relative-uri?noredirect=1&lq=1 – Alex R Oct 10 '22 at 07:33
11 Answers
A lot has changed in Docker since this question was asked, so here's an attempt at an updated answer.
First, specifically with AWS credentials on containers already running inside of the cloud, using IAM roles as Vor suggests is a really good option. If you can do that, then give his answer one more +1 and skip the rest of this.
Once you start running things outside of the cloud, or have a different type of secret, there are two key places that I recommend against storing secrets:
Environment variables: when these are defined on a container, every process inside the container has access to them, they are visible via /proc, apps may dump their environment to stdout where it gets stored in the logs, and most importantly, they appear in clear text when you inspect the container.
In the image itself: images often get pushed to registries where many users have pull access, sometimes without any credentials required to pull the image. Even if you delete the secret from one layer, the image can be disassembled with common Linux utilities like `tar`, and the secret can be found from the step where it was first added to the image.
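To make both risks concrete, here are two throwaway sketches (the image name, container name, and secret value are placeholders, and the exact layout of a docker save archive varies a bit between Docker versions):
docker run -d --name envdemo -e AWS_SECRET_ACCESS_KEY=not-a-real-secret busybox sleep 300
docker inspect envdemo --format '{{.Config.Env}}'   # the "secret" shows up in clear text
docker rm -f envdemo
And for the image case:
docker save your_image -o image.tar
mkdir -p image-contents && tar -xf image.tar -C image-contents
# each layer inside is itself a tar archive; extracting one reveals the files it added,
# even if a later layer "deleted" them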
So what other options are there for secrets in Docker containers?
Option A: If you need this secret only during the build of your image, cannot use the secret before the build starts, and do not have access to BuildKit yet, then a multi-stage build is the best of the bad options. You would add the secret to the initial stages of the build, use it there, and then copy the output of that stage without the secret to your release stage, and only push that release stage to the registry servers. This secret is still in the image cache on the build server, so I tend to use this only as a last resort.
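A minimal sketch of what that can look like (the bucket, artifact name, and stage name are placeholders, and the credentials file is assumed to be in the build context):
FROM python:3 AS fetcher
RUN pip install awscli
# the credentials are baked into this stage only
COPY credentials /root/.aws/credentials
RUN aws s3 cp s3://example-bucket/artifact.tar.gz /artifact.tar.gz

FROM python:3
# only the fetched artifact is copied forward; the credentials never reach this stage
COPY --from=fetcher /artifact.tar.gz /opt/artifact.tar.gz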
Option B: Also during build time, if you can use BuildKit which was released in 18.09, there are currently experimental features to allow the injection of secrets as a volume mount for a single RUN line. That mount does not get written to the image layers, so you can access the secret during build without worrying it will be pushed to a public registry server. The resulting Dockerfile looks like:
# syntax = docker/dockerfile:experimental
FROM python:3
RUN pip install awscli
RUN --mount=type=secret,id=aws,target=/root/.aws/credentials aws s3 cp s3://... ...
And you build it with a command in 18.09 or newer like:
DOCKER_BUILDKIT=1 docker build -t your_image --secret id=aws,src=$HOME/.aws/credentials .
Option C: At runtime on a single node, without Swarm Mode or other orchestration, you can mount the credentials as a read only volume. Access to this credential requires the same access that you would have outside of docker to the same credentials file, so it's no better or worse than the scenario without docker. Most importantly, the contents of this file should not be visible when you inspect the container, view the logs, or push the image to a registry server, since the volume is outside of that in every scenario. This does require that you copy your credentials on the docker host, separate from the deploy of the container. (Note, anyone with the ability to run containers on that host can view your credential since access to the docker API is root on the host and root can view the files of any user. If you don't trust users with root on the host, then don't give them docker API access.)
For a `docker run`, this looks like:
docker run -v $HOME/.aws/credentials:/home/app/.aws/credentials:ro your_image
Or for a compose file, you'd have:
version: '3'

services:
  app:
    image: your_image
    volumes:
      - $HOME/.aws/credentials:/home/app/.aws/credentials:ro
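A quick way to sanity-check that the mounted file is actually being picked up (a sketch assuming the amazon/aws-cli image, which also appears later in this thread, and a container user that reads /root/.aws):
docker run --rm -v $HOME/.aws/credentials:/root/.aws/credentials:ro amazon/aws-cli sts get-caller-identity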
Option D: With orchestration tools like Swarm Mode and Kubernetes, we now have secrets support that's better than a volume. With Swarm Mode, the file is encrypted on the manager filesystem (though the decryption key is often there too, allowing the manager to be restarted without an admin entering a decrypt key). More importantly, the secret is only sent to the workers that need the secret (running a container with that secret), it is only stored in memory on the worker, never disk, and it is injected as a file into the container with a tmpfs mount. Users on the host outside of swarm cannot mount that secret directly into their own container; however, with open access to the docker API, they could extract the secret from a running container on the node, so again, limit who has access to the API. From compose, this secret injection looks like:
version: '3.7'

secrets:
  aws_creds:
    external: true

services:
  app:
    image: your_image
    secrets:
      - source: aws_creds
        target: /home/user/.aws/credentials
        uid: '1000'
        gid: '1000'
        mode: 0700
You turn on swarm mode with `docker swarm init` for a single node, then follow the directions for adding additional nodes. You can create the secret externally with `docker secret create aws_creds $HOME/.aws/credentials`, and you deploy the compose file with `docker stack deploy -c docker-compose.yml stack_name`.
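Putting those three steps together as a runnable sequence:
docker swarm init
docker secret create aws_creds $HOME/.aws/credentials
docker stack deploy -c docker-compose.yml stack_name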
I often version my secrets using a script from: https://github.com/sudo-bmitch/docker-config-update
Option E: Other tools exist to manage secrets, and my favorite is Vault because it gives the ability to create time limited secrets that automatically expire. Every application then gets its own set of tokens to request secrets, and those tokens give them the ability to request those time limited secrets for as long as they can reach the vault server. That reduces the risk if a secret is ever taken out of your network since it will either not work or be quick to expire. The functionality specific to AWS for Vault is documented at https://www.vaultproject.io/docs/secrets/aws/index.html
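As a rough sketch of that flow with Vault's AWS secrets engine (the role name and policy ARN below are placeholders, not something from this question):
vault secrets enable aws
vault write aws/roles/my-role \
    credential_type=iam_user \
    policy_arns=arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
vault read aws/creds/my-role   # returns short-lived AWS keys that Vault revokes when the lease expires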
- it doesn't seem to work. This is the command I used `docker run -v $HOME/.aws/credentials:/home/app/.aws/credentials:ro -it -p 8080:8080 imageName:tagName`. The boto3 error message was `Unable to locate credentials`. I am not sure if it matters, but the permission on the credentials file (`ls -la $HOME/.aws/credentials`) is `-rw-------`. – Jun Sep 16 '21 at 22:44
- @Jun711 if you're on linux, the uid of the file on the host needs to match the uid of the container user. Otherwise, I'd recommend posting a new question with a [mcve] to get help with your question. – BMitch Sep 16 '21 at 23:12
- I am on Mac, I changed the container path to root instead of `/home/app/` and it worked. `docker run -v $HOME/.aws/credentials:/root/.aws/credentials:ro -it -p 8080:8080 imageName:tagName` Do you know how I can access that root dir? I used `docker exec imageId ls -la` but I couldn't find my aws credentials file there. – Jun Sep 16 '21 at 23:30
- Yes, worked for me: `docker run -d -e ASPNETCORE_ENVIRONMENT=development -v "%UserProfile%\.aws\credentials":/root/.aws/credentials:ro --name containerName imageName` – Gabriel Simas Oct 25 '21 at 05:39
The best way is to use an IAM Role and not deal with credentials at all (see http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html).
Credentials can be retrieved from http://169.254.169.254..... Since this is a private IP address, it is accessible only from EC2 instances.
All modern AWS client libraries "know" how to fetch, refresh and use credentials from there, so in most cases you don't even need to know about it. Just run EC2 with the correct IAM role and you are good to go.
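If you want to see what the SDKs do under the hood, you can query the metadata service yourself from the instance (the role name below is a placeholder; instances that enforce IMDSv2 additionally require a session-token header):
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/              # lists the attached role
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/my-ec2-role   # returns temporary keys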
As an option you can pass them at runtime as environment variables (e.g. `docker run -e AWS_ACCESS_KEY_ID=xyz -e AWS_SECRET_ACCESS_KEY=aaa myimage`).
You can access these environment variables by running `printenv` at the terminal.

- Is there a good way to do this during local development/testing that doesn't compromise security in production? I'd love to make sure an image works without deploying it fully. – honktronic Nov 30 '16 at 01:01
- the alternative that I posted with environment variables works fine in a dev/local environment. – Vor Nov 30 '16 at 01:52
- I wonder if this is a typo, but I need to enter `AWS_SECRET_ACCESS_KEY`, not `AWS_SECRET_KEY`; anyway, your answer was very helpful. Thank you. – Akavall Feb 05 '17 at 20:39
- @honktronic - I asked myself the same question and came up with this: https://stackoverflow.com/a/49956609/13087 – Joe Apr 21 '18 at 14:31
- To put it simply (for those who get to this answer the same way I did): a docker container running on EC2 will inherit the same role as the host instance. (I needed an "ELI5" like this when AWS CLI commands in my containers mysteriously worked despite there being no credentials passed to them!) – Adam Westbrook Sep 12 '18 at 17:12
- Easy way to get the key values from your local profile to assign to an environment variable for development purposes (as suggested in https://cameroneckelberry.co/words/getting-aws-credentials-into-a-docker-container-without-hardcoding-it): "aws --profile default configure get aws_access_key_id" – Altair7852 Jan 04 '19 at 20:13
- Will this work for Windows docker containers or just Linux containers? We are dealing with Windows docker containers running in Windows EC2 and ECS with the EC2 Windows launch type. Using the AWS SDK, can I access an AWS service (e.g. Secrets Manager) via the assumed role of the host EC2 instance? – user793886 Sep 25 '20 at 09:51
- while passing in the key and secret via env vars is definitely the easiest, it's discouraged, as you'll now have a dependency on long term credentials. IMO, this whole discussion shines a bright light on the horrible disconnect presented (and not at all addressed in the AWS SDK docs) between using the sdk locally, during development, and in production on one of their instances in whatever service you may be using. It's unfortunate, to say the least. I mean, just reading the workarounds demonstrates how crappy this is. Local, local with docker, containerized deployment... all different... meh – wkhatch Aug 03 '21 at 18:37
- Since I was doing local testing I used @BMitch's answer above, https://stackoverflow.com/a/56077990/175759, specifically the part where you use `docker run -v $HOME/.aws/credentials:/home/app/.aws/credentials:ro your_image` or the alternative for a docker-compose file. – gaoagong Aug 17 '22 at 23:55
Yet another approach is to create a temporary read-only volume in docker-compose.yaml. The AWS CLI and SDKs (like boto3, the AWS SDK for Java, etc.) look for the `default` profile in the `~/.aws/credentials` file.
If you want to use other profiles, you just need to also export the AWS_PROFILE variable before running the `docker-compose` command.
export AWS_PROFILE=some_other_profile_name
version: '3'

services:
  service-name:
    image: docker-image-name:latest
    environment:
      - AWS_PROFILE=${AWS_PROFILE}
    volumes:
      - ~/.aws/:/root/.aws:ro
In this example, I used the root user in the container. If you are using another user, just change `/root/.aws` to that user's home directory. `:ro` stands for a read-only docker volume.
It is very helpful when you have multiple profiles in the `~/.aws/credentials` file and you are also using MFA. It is also helpful when you want to locally test a docker container before deploying it to ECS, where you have IAM Roles, but locally you don't.

- On Windows the `.aws` catalogue is located at `"%UserProfile%\.aws"`. So I assume that you have to change `- ~/.aws/:/root/.aws:ro` to `- %UserProfile%\.aws:/root/.aws:ro` – Artur Siepietowski Oct 05 '19 at 07:19
- This will only work with single build processes and not multistage. – wlarcheveque Dec 05 '19 at 14:32
- Be VERY careful using the `- host:container` syntax: if the file/folder doesn't exist on the host it gets created (as root) and the awscli won't thank you for feeding it a zero byte file. You should use the "long form" that specifies the type is bind, the host path, and the container path on separate lines; this fails if the file doesn't exist, which is what you want in your docker-compose.dev.yml but not in your docker-compose.yml (prod/AWS deploy). – dragon788 Jun 22 '20 at 12:48
Another approach is to pass the keys from the host machine to the docker container. You may add the following lines to the `docker-compose` file.
services:
  web:
    build: .
    environment:
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
      - AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}
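These variables have to exist in the shell that runs docker-compose. One hedged way to populate them from your local profile (as suggested in a comment earlier in this thread; the profile name and region are placeholders):
export AWS_ACCESS_KEY_ID=$(aws --profile default configure get aws_access_key_id)
export AWS_SECRET_ACCESS_KEY=$(aws --profile default configure get aws_secret_access_key)
export AWS_DEFAULT_REGION=us-east-1
docker-compose up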
- The correct region environment variable is AWS_REGION. See https://stackoverflow.com/questions/44151982/aws-java-sdk-unable-to-find-a-region-via-the-region-provider-chain – John Camerin Mar 20 '19 at 12:32
- Please check the official doc which mentions `AWS_DEFAULT_REGION`: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html – prafi Mar 20 '19 at 17:10
- When I used AWS_DEFAULT_REGION, I got an exception that a default region could not be found. My search led to https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/setup-credentials.html which specifies the AWS_REGION environment variable, and that worked for me. – John Camerin Mar 22 '19 at 01:19
- If you are using temporary credentials then you may also need `AWS_SESSION_TOKEN=${AWS_SESSION_TOKEN}` – Davos Apr 09 '20 at 17:22
- Do you need to export AWS_ACCESS_KEY_ID etc. using `export AWS_ACCESS_KEY_ID="myaccesskeyid"`? The AWS_ACCESS_KEY_ID env var was undefined for me. – piedpiper Jan 08 '21 at 01:19
- Note that different SDKs use different environment variables to determine region. The Python (boto3) and JavaScript SDKs look for `AWS_DEFAULT_REGION`, while the Java SDK looks for `AWS_REGION`, which probably accounts for the differences @JohnCamerin and @prafi encountered. Yes, this is certifiably insane. – Peter Halverson Dec 30 '22 at 15:35
- Note: CIS recommendations and best practices in general would suggest: DO NOT put secrets in your environment. Use files and read from them where/when you need them. With global env vars like this, all processes within that container have access to that data, always, and it could leak as part of crash/trace/debug data... – SYN May 07 '23 at 07:02
The following one-liner works for me even when my credentials are set up by aws-okta or saml2aws:
$ docker run -v$HOME/.aws:/root/.aws:ro \
-e AWS_ACCESS_KEY_ID \
-e AWS_CA_BUNDLE \
-e AWS_CLI_FILE_ENCODING \
-e AWS_CONFIG_FILE \
-e AWS_DEFAULT_OUTPUT \
-e AWS_DEFAULT_REGION \
-e AWS_PAGER \
-e AWS_PROFILE \
-e AWS_ROLE_SESSION_NAME \
-e AWS_SECRET_ACCESS_KEY \
-e AWS_SESSION_TOKEN \
-e AWS_SHARED_CREDENTIALS_FILE \
-e AWS_STS_REGIONAL_ENDPOINTS \
amazon/aws-cli s3 ls
Please note that for advanced use cases you might need to allow `rw` (read-write) permissions, so omit the `ro` (read-only) limitation when mounting the `.aws` volume in `-v$HOME/.aws:/root/.aws:ro`.

- I was using something like `-e AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}`, but I just realized from your answer here that if they are named the same I can leave it out. Thx. – gaoagong Aug 12 '22 at 21:52
- I did something similar to the above with the `-e AWS` stuff and got it working, but I didn't include the `-v $HOME/.aws:/root/.aws:ro` because I couldn't get it working by itself as another solution here suggested. Why do you include both the `-v` line and the `-e` lines here? – gaoagong Aug 12 '22 at 21:53
Volume mounting is noted in this thread, but as of docker-compose v3.2+ you can bind mount.
For example, if you have a file named `.aws_creds` in the root of your project, then in your service in the compose file do this for volumes:
volumes:
  # normal volume mount, already shown in thread
  - ./.aws_creds:/root/.aws/credentials
  # way 2, note this requires docker-compose v3.2+
  - type: bind
    source: .aws_creds              # from local
    target: /root/.aws/credentials  # to the container location
Using this idea, you can publicly store your docker images on Docker Hub because your aws credentials will not physically be in the image. To have them associated, you must have the correct directory structure locally where the container is started (i.e. pulling from Git).

This will work for local development, uses your `~/.aws` directory, and/or can be overridden with environment variables or temporary access tokens.
Temporary access tokens are the preferred method for local development: https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html
version: '3.8'

services:
  my-container-name:
    environment:
      - AWS_ACCESS_KEY_ID
      - AWS_SECRET_ACCESS_KEY
      - AWS_SESSION_TOKEN
      - AWS_PROFILE
      - AWS_REGION
      - AWS_DEFAULT_REGION=us-east-1
    secrets:
      - source: aws
        target: /home/appuser/.aws
        uid: "1000"
        gid: "1000"
        mode: 0700

secrets:
  aws:
    file: "~/.aws"
The above assumes you are not running your processes as root (please don't!) and are instead running them as `appuser`. If you really want to run them as root, replace `/home/appuser/` with `/root/`.
I recommend adding this to your `Dockerfile`, usually right before the entrypoint/cmd lines:
RUN groupadd -r -g 1000 appuser && useradd -m -r -u 1000 -g appuser appuser
USER appuser
For actual production deployments, assuming K8s, you should make use of IAM Roles and Kubernetes service accounts: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ Better explainer: https://aws.amazon.com/blogs/containers/diving-into-iam-roles-for-service-accounts/
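As a hedged sketch of the IRSA side (the account ID, role, and service account names below are placeholders): pods that use a service account annotated like this receive temporary credentials from EKS, with no keys in the image or environment:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/my-app-role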

You could create `~/aws_env_creds` containing:
touch ~/aws_env_creds
chmod 777 ~/aws_env_creds
vi ~/aws_env_creds
Add these values (replacing the keys with your own):
AWS_ACCESS_KEY_ID=AK_FAKE_KEY_88RD3PNY
AWS_SECRET_ACCESS_KEY=BividQsWW_FAKE_KEY_MuB5VAAsQNJtSxQQyDY2C
Press "esc" to save the file.
Run and test the container:
my_service:
  build: .
  image: my_image
  env_file:
    - ~/aws_env_creds

- It's a working solution, but I would avoid setting the file with `777` permissions, as any other user with access to the host will be able to read the credentials file... Not very good, as the point of using env variables is to keep credentials away from anyone/anything that is not the aws service that needs them! Maybe [744 is more appropriate](https://chmodcommand.com/chmod-744/) – funder7 Feb 02 '22 at 13:47
If someone still faces the same issue after following the instructions mentioned in the accepted answer, then make sure that you are not passing environment variables from two different sources. In my case I was passing environment variables to `docker run` via a file and as parameters, which caused the variables passed as parameters to have no effect.
So the following command did not work for me:
docker run --env-file ./env.list -e AWS_ACCESS_KEY_ID=ABCD -e AWS_SECRET_ACCESS_KEY=PQRST IMAGE_NAME:v1.0.1
Moving the aws credentials into the mentioned `env.list` file helped.
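If you run into this, a quick throwaway check (using the busybox image and fake values, not anything from this answer) shows which source wins on your Docker version:
echo 'AWS_ACCESS_KEY_ID=FROM_FILE' > env.list
docker run --rm --env-file ./env.list -e AWS_ACCESS_KEY_ID=FROM_FLAG busybox sh -c 'echo $AWS_ACCESS_KEY_ID'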

For a PHP Apache docker container, the following command works:
docker run --rm -d -p 80:80 --name my-apache-php-app -v "$PWD":/var/www/html -v ~/.aws:/.aws --env AWS_PROFILE=mfa php:7.2-apache

Based on some of the previous answers, I built my own as follows. My project structure:
├── Dockerfile
├── code
│   └── main.py
├── credentials
├── docker-compose.yml
└── requirements.txt
My `docker-compose.yml` file:
version: "3"
services:
app:
build:
context: .
volumes:
- ./credentials:/root/.aws/credentials
- ./code:/home/app
My `Dockerfile`:
FROM python:3.8-alpine
RUN pip3 --no-cache-dir install --upgrade awscli
RUN mkdir -p /home/app
WORKDIR /home/app
CMD python main.py
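Assuming that layout, you would build and run it with:
docker-compose up --build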
