49

I am working on a simple Docker image that has a large number of environment variables. Is it possible to import an environment variable file, like you can with docker-compose? I cannot find anything about this in the Dockerfile documentation.

Dockerfile

FROM python:3.6

ENV ENV1 9.3
ENV ENV2 9.3.4
...

ADD . /

RUN pip install -r requirements.txt

CMD [ "python", "./manager.py" ]

I guess a good way to rephrase the question would be: how do you efficiently load multiple environment variables in a Dockerfile? If you cannot load them from a file, the values have to be written into the Dockerfile itself, which you then cannot safely commit to GitHub.

hY8vVpf3tyR57Xib

4 Answers

47

Yes, there are a couple of ways you can do this.

Docker Compose

In Docker Compose, you can supply environment variables in the file itself, or point to an external env file:

# docker-compose.yml
version: '2'
services:

  service-name:
    image: service-app
    environment:
    - GREETING=hello
    env_file:
    - .env

Incidentally, one nice feature that is somewhat related is that you can use multiple Compose files, with each subsequent one adding to the other. So if the above were to define a base, you can then do this (e.g. per run-time environment):

# docker-compose-dev.yml
version: '2'
services:

  service-name:
    environment:
    - GREETING=goodbye

You can then run it thus:

docker-compose -f docker-compose.yml -f docker-compose-dev.yml up

Docker only

To do this in Docker only, use your entrypoint or command to run an intermediate script, thus:

#Dockerfile

....

ENTRYPOINT ["sh", "bin/start.sh"]

And then in your start script:

#!/bin/sh

# `source` is a bash-ism; plain sh (which the ENTRYPOINT invokes) uses `.` instead.
# set -a exports every variable the file assigns, so the Python process inherits them.
set -a
. ./.env
set +a

python /manager.py

I've used this related answer as a helpful reference for myself in the past.

Update on PID 1

To amplify my remark in the comments: if you make your entry point a shell or Python script, it is likely that Unix signals (stop, kill, etc.) will not be passed on to your process. This is because that script becomes process ID 1, the parent of all other processes in the container. In Linux/Unix there is an expectation that PID 1 forwards signals to its children, but unless you explicitly implement that, it won't happen.

To rectify this, you can install an init system. I use dumb-init from Yelp. This repo also features plenty of detail if you want to understand it a bit better, or simple install instructions if you just want to "install and forget".
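For reference, here is a minimal Dockerfile sketch based on the install snippet in the dumb-init README (the release version and architecture below may be out of date, so check the repo for a current one):

RUN wget -O /usr/local/bin/dumb-init \
      https://github.com/Yelp/dumb-init/releases/download/v1.2.5/dumb-init_1.2.5_x86_64 \
 && chmod +x /usr/local/bin/dumb-init

# dumb-init becomes PID 1 and forwards signals to the start script and its children
ENTRYPOINT ["/usr/local/bin/dumb-init", "--", "sh", "bin/start.sh"]

dumb-init is also published on PyPI, so inside a Python base image `pip install dumb-init` is another install route. Either way, ending the start script with `exec python /manager.py` stops the shell from sitting between dumb-init and your application.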

halfer
  • But the question is if this is possible without docker-compose? It seems like a bit of overkill if all that I want is this .env file... – hY8vVpf3tyR57Xib Oct 24 '17 at 20:56
  • @hY8vVpf3tyR57Xib: ah, I see what you mean. I wonder, perhaps `RUN source .env`? Or do `CMD ["sh", "start.sh"]` which does that source command prior to starting your Python program. – halfer Oct 24 '17 at 21:00
  • @halfer Actually, I would be interested in your advice on "how to handle unix signals correctly". I do not know what you mean by dumb init system. – Richard Kiefer Oct 31 '19 at 17:38
  • @RichardKiefer: answer edited, let me know if you have further questions. – halfer Oct 31 '19 at 17:51
14

I really like @halfer's approach, but this could also work: docker run takes an optional parameter called --env-file, which is super helpful.

So your Dockerfile could look like this:

COPY .env .env

and then in a build script use:

docker build -t my_docker_image . && docker run --env-file .env my_docker_image
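For completeness, the .env file is just plain KEY=value lines (with # for comments); a hypothetical one mirroring the variables from the question would be:

# .env (hypothetical contents, mirroring the ENV lines from the question)
ENV1=9.3
ENV2=9.3.4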
Justin Rice
  • VOLUME ["/conf.d", "/mnt/logs"] - what does it mean? – Dmitry Grinko Feb 28 '20 at 03:35
  • Ahh good catch! This would be mounting volumes to your docker container. These two volumes would be specifically for logging and monitoring tools like DataDog. – Justin Rice Feb 28 '20 at 17:41
  • Do you want to remove FROM and VOLUME strings to make the answer more readable? – Dmitry Grinko Feb 28 '20 at 17:55
  • Why do I need `COPY .env .env`? The referenced --env-file for docker run could have any name, right? E.g. `docker run --env-file .local.env my_docker_image`. Am I missing some point here? – Colin Feb 28 '21 at 17:45
12

There are various options:
https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e-env-env-file

docker run -e MYVAR1 --env MYVAR2=foo --env-file ./env.list ubuntu bash

(You can also just reference previously exported variables, see USER below.)

The one that answers your question about a .env file is:

$ cat env.list
# This is a comment
VAR1=value1
VAR2=value2
USER

$ docker run --env-file env.list ubuntu env | grep VAR
VAR1=value1
VAR2=value2

$ docker run --env-file env.list ubuntu env | grep USER
USER=denis

You can also load the environment variables from a file. This file should use the syntax variable=value (which sets the variable to the given value) or variable (which takes the value from the local environment), and # for comments.

Regarding the difference between variables needed at (image) build time or (container) runtime, and how to combine ENV and ARG for dynamic build arguments, you might try this:
ARG or ENV, which one to use in this case?
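As a hedged illustration of that combination (APP_VERSION is a made-up name): a value passed with --build-arg exists only while the image is built, so it has to be copied into an ENV if the running container should also see it:

# Sketch only: pass a value at build time and persist it for runtime
ARG APP_VERSION=dev
ENV APP_VERSION=${APP_VERSION}

Building with `docker build --build-arg APP_VERSION=1.2.3 .` then leaves APP_VERSION=1.2.3 set inside containers started from the image.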

arne
0

If you need environment variables at runtime, it's easiest to create a launcher script that sets up the environment with multiple export statements and then launches your process.
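A minimal sketch of such a launcher, assuming the variables and entry point from the question:

#!/bin/sh
# Hypothetical launcher: export the variables, then hand over to the app.
export ENV1=9.3
export ENV2=9.3.4
# exec replaces the shell so it does not linger as an extra parent process
exec python /manager.py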

If you need them at build time, have a look at the ARG and ENV statements. You'll need one per variable.

gogstad