
How can one access an external database from a container? Is the best way to hard-code the connection string?

# Dockerfile
ENV DATABASE_URL amazon:rds/connection?string

21 Answers


You can pass environment variables to your containers with the -e flag.

An example from a startup script:

sudo docker run -d -t -i -e REDIS_NAMESPACE='staging' \
  -e POSTGRES_ENV_POSTGRES_PASSWORD='foo' \
  -e POSTGRES_ENV_POSTGRES_USER='bar' \
  -e POSTGRES_ENV_DB_NAME='mysite_staging' \
  -e POSTGRES_PORT_5432_TCP_ADDR='docker-db-1.hidden.us-east-1.rds.amazonaws.com' \
  -e SITE_URL='staging.mysite.com' \
  -p 80:80 \
  --link redis:redis \
  --name container_name dockerhub_id/image_name

Or, if you don't want to have the value on the command line, where it will be displayed by ps, etc., -e can pull the value in from the current environment if you give just the name without the =:

sudo PASSWORD='foo' docker run  [...] -e PASSWORD [...]

If you have many environment variables and especially if they're meant to be secret, you can use an env-file:

$ docker run --env-file ./env.list ubuntu bash

The --env-file flag takes a filename as an argument and expects each line to be in the VAR=VAL format, mimicking the argument passed to --env. Comment lines need only be prefixed with #.
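As a sketch (file name and variable names hypothetical), an env.list might look like this; every non-comment line is read verbatim:

```shell
# Create a hypothetical env.list; each non-comment line must be VAR=VAL.
cat > env.list <<'EOF'
# database settings
POSTGRES_ENV_POSTGRES_USER=bar
POSTGRES_ENV_POSTGRES_PASSWORD=foo
SITE_URL=staging.mysite.com
EOF

# docker run --env-file ./env.list would pass these through verbatim;
# the non-comment lines it would read are:
grep -v '^#' env.list
```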

Zorawar
errata
  • Is there an easier way to do this? It's really irritating having to re-create the container with different variables every time. Maybe store it in a file? – Jason Axelson Jul 27 '16 at 19:58
  • I store the docker run commands in shell scripts (./start_staging.sh, etc.) and then execute them remotely using Ansible. – errata Jul 27 '16 at 21:39
  • I'm having trouble getting the second version to work; I set PASSWORD=foo in the environment, then pass --env PASSWORD, and only the word "PASSWORD" shows up in the container's config.json; every other environment variable has a key and a value. I'm using Docker 1.12.1. – Kevin Burke Sep 20 '16 at 17:01
  • @KevinBurke: Do `export PASSWORD=foo` instead and the variable will be passed to `docker run` as an environment variable, making `docker run -e PASSWORD` work. – qerub Nov 25 '16 at 10:28
  • Just to be clear, `-e` on the command line and `ENV` in the Dockerfile do the same thing? – Inderpartap Cheema Oct 07 '20 at 18:50
  • Let's say I'm using a `.env` file and specify the env-file for the nginx image in `docker-compose.yml`. Everything works great. Now how do I make this .env available in production each time a new container is created automatically with Kubernetes, without exposing my secret passwords and keys? – KeitelDOG Dec 17 '20 at 17:32
  • The painful thing I learned is that you should pass all `-e` values before the name of the Docker image, otherwise no error will be raised and none of the variables will have a value! – Jalal Feb 15 '21 at 21:44
  • What is the "-d" flag? – Steffi Keran Rani J Apr 27 '21 at 17:10
  • The `-d` (`--detach`) flag means "Run container in background and print container ID", as described in the `docker run --help` output. You can find more details in the [documentation](https://docs.docker.com/engine/reference/run/#detached-vs-foreground). – artu-hnrq Jun 22 '21 at 02:08
  • How can I use this without the run command? The container is already running perfectly, and I don't want to remove it or run the commit command. How can I set a variable on a container that is already running? – withoutOne Apr 26 '22 at 13:49
  • FYI: The `.env` file variables shouldn't have any quotes: `MY_VAR=variablecontent` – Gidi9 Jul 12 '23 at 11:56

You can pass them using the -e parameter with the docker run ... command, as mentioned here and by errata.

However, a possible downside of this approach is that your credentials will be displayed in the process listing where you run it.

To make it more secure, you may write your credentials to a configuration file and run docker run with --env-file, as mentioned here. Then you can restrict access to that configuration file so that others with access to the machine can't see your credentials.

Peter Mortensen
Sabin
  • I added another way to address this concern to @errata's answer. – Bryan May 29 '15 at 12:49
  • Be careful of `--env-file`: when you use `--env`, your env values will be quoted/escaped with the standard semantics of whatever shell you're using, but when using `--env-file` the values you get inside your container will be different. The docker run command just reads the file, does very basic parsing, and passes the values through to the container; it's not equivalent to the way your shell behaves. Just a small gotcha to be aware of if you're converting a bunch of `--env` entries to an `--env-file`. – Shorn Oct 04 '16 at 23:50
  • To elaborate on Shorn's answer: when using the env-file, I had to put a very long environment variable's value all on one line, since there doesn't appear to be any way to put a line break in it or divide it up into multiple lines, such as: $MY_VAR=stuff $MY_VAR=$MY_VAR more stuff – Jason White Nov 18 '16 at 15:54

Use -e or --env value to set environment variables (default []).

An example from a startup script:

 docker run -e myhost='localhost' -it busybox sh

If you want to use multiple environment variables from the command line, then precede every environment variable with the -e flag.

Example:

 sudo docker run -d -t -i -e NAMESPACE='staging' -e PASSWORD='foo' busybox sh

Note: Make sure to put the image name after the environment variables, not before them.

If you need to set up many variables, use the --env-file flag.

For example,

 $ docker run --env-file ./my_env ubuntu bash

For any other help, look into the Docker help:

 $ docker run --help

Official documentation: https://docs.docker.com/compose/environment-variables/

godblessstrawberry
Vishnu Mishra
  • Why do we need `ubuntu bash`? Does it apply to images created with ubuntu as the base image or to every image? – Reyansh Kharga Oct 12 '19 at 07:39
  • @ReyanshKharga ubuntu is the name of the image and bash is the command you are executing. `bash` gives you a terminal (although I think you need -it for an interactive terminal). – Brandon Sep 23 '20 at 07:21

If you are using docker-compose to spin up your container(s), there is a useful way to pass an environment variable defined on your server to the Docker container.

In your docker-compose.yml file, let's say you are spinning up a basic hapi-js container and the code looks like:

hapi_server:
  container_name: hapi_server
  image: node_image
  expose:
    - "3000"

Let's say that the local server that your docker project is on has an environment variable named 'NODE_DB_CONNECT' that you want to pass to your hapi-js container, and you want its new name to be 'HAPI_DB_CONNECT'. Then in the docker-compose.yml file, you would pass the local environment variable to the container and rename it like so:

hapi_server:
  container_name: hapi_server
  image: node_image
  environment:
    - HAPI_DB_CONNECT=${NODE_DB_CONNECT}
  expose:
    - "3000"

I hope this helps you to avoid hard-coding a database connect string in any file in your container!

Pang
Marquistador
  • This won't work. Those variables are not passed to the container. – Frondor May 11 '18 at 00:17
  • @Frondor really? According to these [docs](https://docs.docker.com/compose/environment-variables/) it seems like it should. – darda May 05 '19 at 13:59
  • The problem with this approach is that you commit the environment variables in the docker-compose.yml file to the git repository, which you should not. How do you get around this? Ideally you would have a separate env file that is gitignored and can be imported/loaded into the Dockerfile or docker-compose.yml – Khaled Osman Sep 26 '19 at 14:49
  • The last snippet enabled me to identify one development compose/environment and the COMPUTERNAME, which this instance uses to communicate with another development compose, thanks. – Gunnar Feb 22 '23 at 08:25

Using docker-compose, you can inherit env variables in docker-compose.yml and subsequently in any Dockerfile(s) called by docker-compose to build images. This is useful when the Dockerfile RUN command should execute commands specific to the environment.

(your shell has RAILS_ENV=development already existing in the environment)

docker-compose.yml:

version: '3.1'
services:
  my-service:
    build:
      # $RAILS_ENV here references the shell environment variable RAILS_ENV
      # and passes it to the Dockerfile ARG RAILS_ENV.
      # The syntax below ensures that the RAILS_ENV arg defaults to
      # production if empty.
      # Note that if dockerfile: is not specified, the file name Dockerfile is assumed.
      context: .
      args:
        - RAILS_ENV=${RAILS_ENV:-production}
    environment:
      - RAILS_ENV=${RAILS_ENV:-production}

Dockerfile:

FROM ruby:2.3.4

#give ARG RAILS_ENV a default value = production
ARG RAILS_ENV=production

#assign the $RAILS_ENV arg to the RAILS_ENV ENV so that it can be accessed
#by the subsequent RUN call within the container
ENV RAILS_ENV $RAILS_ENV

#the subsequent RUN call accesses the RAILS_ENV ENV variable within the container
RUN if [ "$RAILS_ENV" = "production" ] ; then echo "production env"; else echo "non-production env: $RAILS_ENV"; fi

This way, I don't need to specify environment variables in files or docker-compose build/up commands:

docker-compose build
docker-compose up
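The ${RAILS_ENV:-production} default syntax used above is plain shell parameter expansion, so you can check the behavior outside Compose:

```shell
# With RAILS_ENV unset, the fallback applies:
unset RAILS_ENV
echo "${RAILS_ENV:-production}"   # → production

# With RAILS_ENV set, the shell value wins:
RAILS_ENV=development
echo "${RAILS_ENV:-production}"   # → development
```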
Pang
joshweir
  • Do they have to be the same name? Seems kind of confusing... And how would I override the args if I want to run development instead? – CyberMew Jun 14 '18 at 11:06
  • @CyberMew Yes, they have to be the same name between your environment, docker-compose and the Dockerfile. If you want to run development instead, then before running docker-compose build, run RAILS_ENV=development in your terminal to set the environment variable; that way docker-compose, and in turn the Dockerfile, will inherit that value from your environment. – joshweir Jun 14 '18 at 11:27

We can also use host machine environment variables using the -e flag and $:

Before running the following command, we need to export (i.e., set) the local environment variables.

docker run -it -e MG_HOST=$MG_HOST \
    -e MG_USER=$MG_USER \
    -e MG_PASS=$MG_PASS \
    -e MG_AUTH=$MG_AUTH \
    -e MG_DB=$MG_DB \
    -t image_tag_name_and_version

By using this method, the environment variables are set automatically under your given names; in my case, MG_HOST and MG_USER.

Additionally:

If you are using Python, you can access these environment variables inside Docker by:

import os

host = os.environ.get('MG_HOST')
username = os.environ.get('MG_USER')
password = os.environ.get('MG_PASS')
auth = os.environ.get('MG_AUTH')
database = os.environ.get('MG_DB')
Peter Mortensen
Mobin Al Hassan
  • In case anyone still has problems with this `docker run` command: it is worth noting that the env variables `-e` must come BEFORE `-t` as shown. I had mine placed after the image and it wasn't working. – MaxiJonson Mar 13 '22 at 18:54
  • Yes, you should use the -e flag before the -t flag, but if you share an error then we can understand better... – Mobin Al Hassan Mar 14 '22 at 01:11
  • That's the thing, there were no errors. The environment variables are simply not passed. I couldn't figure out what the problem was until I switched the order as you've shown in the answer. – MaxiJonson Mar 15 '22 at 03:17

There is a nice hack for piping host machine environment variables into a Docker container:

env > env_file && docker run --env-file env_file image_name

Use this technique very carefully, because env > env_file will dump ALL host machine environment variables into env_file and make them accessible in the running container.
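To see how indiscriminate the dump is, you can inspect the file without Docker at all; the container would receive every one of these lines:

```shell
# Dump the entire host environment, exactly as the hack does:
env > env_file

# Even variables you never meant to share end up in the file, e.g. PATH:
grep '^PATH=' env_file

# Count how many variables would be handed to the container:
grep -c '=' env_file
```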

Peter Mortensen
Alex T

The problem I had was that I was putting --env-file at the end of the command:

docker run -it --rm -p 8080:80 imagename --env-file ./env.list

Fix

docker run --env-file ./env.list -it --rm -p 8080:80 imagename

The reason is that the docker run command has the signature below. You can see that the options come before the image name. The image name feels like an option, but it is a parameter to the run command, and everything after it is passed to the container as its command, which is why a misplaced --env-file is silently ignored.

docker run [OPTIONS] IMAGE [COMMAND] [ARG...]

T Brown
  • I did this silly mistake. Thanks @T Brown. – Paresh Dudhat May 09 '21 at 16:07
  • I made the same mistake. The `-e` should come before the image and container names. – Promise Preston May 14 '23 at 07:12

Another way is to use the powers of /usr/bin/env:

docker run ubuntu env DEBUG=1 path/to/script.sh
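The same env(1) mechanism works outside Docker, so you can verify the behavior locally (the DEBUG name is just an example):

```shell
# env(1) sets DEBUG=1 only for the child process it launches:
env DEBUG=1 sh -c 'echo "DEBUG is $DEBUG"'   # → DEBUG is 1
```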
sanmai

docker run --rm -it --env-file <(bash -c 'env | grep <your env data>') is a way to grep the data stored in a .env file and pass it to Docker without anything being stored insecurely (so you can't just look at docker history and grab keys).

Say you have a load of AWS stuff in your .env like so:

AWS_ACCESS_KEY=xxxxxxx
AWS_SECRET=xxxxxx
AWS_REGION=xxxxxx

Running Docker with docker run --rm -it --env-file <(bash -c 'env | grep AWS_') will grab them all and pass them securely to be accessible from within the container.
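The filtering itself can be checked without Docker; only the AWS_-prefixed variables survive the grep (variable names here are illustrative):

```shell
export AWS_REGION=us-east-1
export MY_LOCAL_SECRET=keep-me-home   # hypothetical, should NOT be passed along

# This is exactly what the process substitution feeds to --env-file:
env | grep '^AWS_'
```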

Daniel Compton

For Amazon AWS ECS/ECR, you should manage your environment variables (especially secrets) via a private S3 bucket. See blog post How to Manage Secrets for Amazon EC2 Container Service–Based Applications by Using Amazon S3 and Docker.

Peter Mortensen
Joseph Juhnke

If you have the environment variables in an env.sh file locally and want to set them up when the container starts, you could try:

COPY env.sh /env.sh
COPY <filename>.jar /<filename>.jar
ENTRYPOINT ["/bin/bash" , "-c", "source /env.sh && printenv && java -jar /<filename>.jar"]

This starts the container with a Bash shell (I want a Bash shell since source is a Bash command), sources the env.sh file (which sets the environment variables), and executes the jar file.

The env.sh looks like this,

#!/bin/bash
export FOO="BAR"
export DB_NAME="DATABASE_NAME"

I added the printenv command only to test that the source command actually works. You should remove it once you confirm that source works fine; otherwise the environment variables will appear in your Docker logs.
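What the ENTRYPOINT does can be reproduced locally, minus the jar (file names as in the answer):

```shell
# Recreate the env.sh from the answer:
cat > env.sh <<'EOF'
#!/bin/bash
export FOO="BAR"
export DB_NAME="DATABASE_NAME"
EOF

# Source it and show that the variables reach a child process,
# just as they would reach the java process in the container:
bash -c 'source ./env.sh && printenv FOO DB_NAME'
```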

Pang
akilesh raj
  • With this approach you'll have to rebuild your Docker image each time you want to pass a different/modified env set. Passing envs with `docker run --env-file ./somefile.txt` is the superior/dynamic approach. – Dmitry Shevkoplyas Mar 23 '18 at 17:57
  • @DmitryShevkoplyas I agree. My use case is where there is no option of specifying the `--env-file` arg to a `docker run` command. For example, if you are deploying an application using Google App Engine and the app running inside the container needs environment variables set inside the Docker container, you do not have a direct approach to set the environment variables since you do not have control over the `docker run` command. In such a case, you could have a script that decrypts the env variables using, say, KMS, and adds them to the `env.sh`, which can be sourced to set the env variables. – akilesh raj Mar 25 '18 at 08:19
  • You can use the POSIX `.` (dot) command, available in regular `sh`, instead of `source`. (`source` is the same as `.`) – go2null Mar 24 '19 at 14:04
  • Be aware that signals will not reach your executable if you wrap the call in a shell command. Basically, you will not be able to interrupt your process without some extra Bash fu. – David Weber Feb 20 '21 at 06:21

To pass multiple environment variables via docker-compose, an environment file can be used in the docker-compose file as well:

web:
  env_file:
    - web-variables.env

https://docs.docker.com/compose/environment-variables/#the-env_file-configuration-option

Dark Light

You can use -e or --env as an argument, followed by a key-value pair.

For example:

docker run -e MYSQL_ROOT_PASSWORD=root <image_name>

Note that -e belongs to docker run; docker build does not accept environment variables this way (build-time values are passed with --build-arg instead).
Peter Mortensen
yash bhangare

Using jq to convert the environment to JSON:

env_as_json=`jq -c -n env`
docker run -e HOST_ENV="$env_as_json" <image>

This requires jq version 1.6 or newer.

This puts the host environment into the container as a single JSON string, essentially like so in Dockerfile terms:

ENV HOST_ENV  (all environment from the host as JSON)
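Inside the container, the application can then parse HOST_ENV back into a map. A minimal Python sketch (the assignment simulates what docker run -e would have set; keys are illustrative):

```python
import json
import os

# Simulate what `docker run -e HOST_ENV="$env_as_json"` would set:
os.environ["HOST_ENV"] = json.dumps({"PATH": "/usr/bin", "USER": "demo"})

# Inside the container, the app decodes it back into a dict:
host_env = json.loads(os.environ["HOST_ENV"])
print(host_env["USER"])   # → demo
```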
Peter Mortensen
Alexander Mills
  • How is that line working for you: `docker run -e HOST_ENV="$env_as_json"`? In my case Docker doesn't seem to be resolving variables or subshells (`${}` or `$()`) when passed as docker args. For example: `A=123 docker run --rm -it -e HE="$A" ubuntu`, then inside that container `echo $HE` prints nothing. The `HE` variable doesn't make it. – Perplexabot Feb 20 '20 at 00:41

There are several ways to pass environment variables to the container including using docker-compose (best choice if possible).

I recommend using an env file for easier organization and maintenance.

EXAMPLE (docker-compose CLI)

docker-compose -f docker-compose.yml --env-file ./.env up

EXAMPLE (docker CLI)

docker run -it --name "some-ctn-name" --env-file ./.env "some-img-name:Dockerfile"

IMPORTANT: The docker CLI has some limitations regarding environment variables (see below).

ISSUE: Docker run and environment variables with quotes and double quotes

The docker run subcommand, strangely, does not accept env files formatted as valid BASH ("Shell") scripts, so it considers surrounding quotes and double quotes as part of the value of the environment variables; the container will therefore get the value of (in an env file, for example)...

SOME_ENV_VAR_A="some value a"

... as "some value a" and not some value a. Other than that, we'll have problems using the same env file in other contexts (including BASH itself).

This is quite strange behavior, since .env files are often written as regular BASH ("Shell") scripts.

However, BASH ("Shell") offers us powerful features, so let's use it to our advantage in a workaround solution.

My solution involves a Dockerfile, an env file, a BASH script file and the run subcommand (docker run) in a special way.

The strategy consists of injecting your environment variables using another environment variable set in the run subcommand and using the container itself to set these variables.

Workaround Solution

Create a Dockerfile

EXAMPLE

FROM python:3.10-slim-buster
WORKDIR /some-name
COPY . /some-name/
RUN apt-get -y update \
    && apt-get -y upgrade \
    [...]
ENTRYPOINT bash entrypoint.bash

Create an env file (BASH script file) (.env)

EXAMPLE

#!/bin/bash

# Some description a
SOME_ENV_VAR_A="some value a"

# Some description b
SOME_ENV_VAR_B="some value b"

# Some description c
SOME_ENV_VAR_C="some value c"
[...]

Create a BASH script file for the ENTRYPOINT (entrypoint.bash)

EXAMPLE

#!/bin/bash

set -a;source <(echo -n "$ENV_VARS");set +a
python main.py
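The set -a trick in this entrypoint can be verified outside Docker; set -a marks every variable assigned while it is active for export, so sourced assignments reach child processes. A sketch (values mirror the example .env; a temp file stands in for bash's process substitution):

```shell
# Mimic what `docker run --env ENV_VARS="$(cat ./.env)"` would set:
ENV_VARS='SOME_ENV_VAR_A="some value a"'

# Source the variables with auto-export enabled:
printf '%s\n' "$ENV_VARS" > /tmp/env_vars.sh
set -a
. /tmp/env_vars.sh
set +a

# A child process (like `python main.py` in the entrypoint) now sees it:
sh -c 'echo "$SOME_ENV_VAR_A"'   # → some value a
```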

Injecting your environment variables using the run subcommand

EXAMPLE

docker run -it --name "some-ctn-name" --env ENV_VARS="$(cat ./.env)" "some-img-name:Dockerfile"

PLUS

docker-compose does not have this problem, as it uses YAML. YAML does not consider surrounding quotes and double quotes as part of the value of environment variables, unlike the docker run subcommand.


Eduardo Lucio

Here is how I was able to solve it:

docker run --rm -ti -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN -e AWS_SECURITY_TOKEN amazon/aws-cli s3 ls

One more example:

export VAR1=value1
export VAR2=value2

docker run --env VAR1 --env VAR2 ubuntu env | grep VAR

Output:

VAR1=value1
VAR2=value2
Peter Mortensen
EDU_EVER

There are some documentation inconsistencies for setting environment variables with docker run.

The online reference says one thing:

--env , -e Set environment variables

The manpage is a little different:

-e, --env=[] Set environment variables

The docker run --help gives something else again:

-e, --env list Set environment variables


Something that isn't necessarily clear in any of the available documentation:

A trailing space after -e or --env can be replaced by =, or in the case of -e can be elided altogether:

$ docker run -it -ekey=value:1234 ubuntu env
key=value:1234

A trick that I found by trial and error (and clues in the above)...

If you get the error:

unknown flag: --env

Then you may find it helpful to use an equals sign with --env, for example:

--env=key=value:1234

Different methods of launching a container may have different parsing scenarios.


These tricks may be helpful when using Docker in various composing configurations, such as Visual Studio Code devcontainer.json, where spaces are not allowed in the runArgs array.

Brent Bradburn

Easiest solution: Just run these commands

sudo docker container run -p 3306:3306 -e MYSQL_RANDOM_ROOT_PASSWORD=yes --name mysql -d mysql
sudo docker container logs mysql

What is happening there?

  • The first command runs the mysql container with a random root password.
  • The second command shows the logs of the container, where you will be able to find the random password that was generated.

Explicit solution: Here we can not only pass our own password and database name, but we can also create a specific network through which any application can interact with this database. Moreover, we can access the database and see its contents. Please see below:

docker network create todo-app
docker run -d \
     --network todo-app --network-alias mysql \
     -v todo-mysql-data:/var/lib/mysql \
     -e MYSQL_ROOT_PASSWORD=secret \
     -e MYSQL_DATABASE=todos \
     mysql:8.0
docker exec -it <mysql-container-id> mysql -u root -p
SHOW DATABASES;
Md. Shahariar Hossen

To import environment variables into containers, you can use env_file: in your docker-compose.yaml file, or you can copy the .env file into the container and then read it with a dotenv library.

Python project

You can use the python-dotenv package:

pip install python-dotenv

Then in code:

import os
from dotenv import load_dotenv

load_dotenv()
SECRET_KEY = os.getenv("MY_SECRET")

Go project

github.com/joho/godotenv package:

go get github.com/joho/godotenv

In your code:

package main

import (
    "github.com/joho/godotenv"
    "log"
    "os"
)

func main() {
  err := godotenv.Load()
  if err != nil {
    log.Fatal("Error loading .env file")
  }

  secretKey := os.Getenv("MY_SECRET")
  log.Println(secretKey) // use the value so the program compiles
}
Peter Mortensen
Oraz

Example: Suppose you want to start a MySQL database container, so you need to pass the following variables:

docker run -dit --name db1 -e MYSQL_ROOT_PASSWORD=root -e MYSQL_DATABASE=mydb -e MYSQL_USER=jack -e MYSQL_PASSWORD=redhat mysql:5.7
helvete
  • Please include a link to the docker documentation where -e is explained. The example is good, but it would go better with a syntax explanation. – Igor Jan 27 '23 at 03:20