
Consider a project that contains two images that interact. Good practice seems to be to structure the project so that each image lives in its own directory containing its Dockerfile, with a docker-compose.yml at the top level:

project_package/
├── docker-compose.yml
├── image1/
│   ├───── __init__.py
│   ├───── Dockerfile1
│   ├───── image1_1.py
│   └───── image1_2.py
│
└── image2/
    ├───── __init__.py
    ├───── Dockerfile2
    ├───── image2_1.py
    └───── image2_2.py

Now suppose that there is some common code that both images depend upon (in Python parlance - there's a module that they both import from). I see a few approaches here:

  1. Duplicate that code into the directories image1 and image2, and build directly. Undesirable because then I need to update in two places when I change common/.
project_package/
├── docker-compose.yml
├── image1/
│   ├───── common/
│   │      ├───── common1.py
│   │      └───── common2.py
│   ├───── __init__.py
│   ├───── Dockerfile1
│   ...
│
└── image2/
    ├───── common/
    │      ├───── common1.py
    │      └───── common2.py
    ├───── __init__.py
    ├───── Dockerfile2
    ...
  2. Have three top-level directories - image1, image2, and common - and have Dockerfile1 and Dockerfile2 copy files from the common directory into their images. This doesn't work as imagined - the docs say that "The path must be inside the context of the build" - though there might be a way to make it work?
project_package/
├── docker-compose.yml
├── image1/
│   ├───── __init__.py
│   ├───── Dockerfile1
│   ...
│
├── image2/
│   ├───── __init__.py
│   ├───── Dockerfile2
│   ...
│
└── common/
    ├───── common1.py
    └───── common2.py

# image1/Dockerfile1
FROM python:3.8-slim-buster
RUN mkdir common
COPY ../common/ common   # fails: ../common is outside the build context
...
  3. Three directories as above, but place the Dockerfiles in the root directory rather than in the per-image directories, so that the COPY instructions can "see" the common/ directory. This contravenes the apparent best practice of colocating each Dockerfile with its directory, but is perhaps good otherwise?
project_package/
├── docker-compose.yml
├── Dockerfile1
├── Dockerfile2
├── image1/
│   ├───── __init__.py
│   ├───── image1_1.py
│   ...
│
├── image2/
│   ├───── __init__.py
│   ├───── image2_1.py
│   ...
│
└── common/
    ├───── common1.py
    └───── common2.py
  4. Build the common code into its own image, and reference that image from the main image Dockerfiles with COPY --from (multi-stage style):
<Same layout as in 2>

# .github/workflows/main.yml
...
steps:
  - name: Build and push Common
    id: docker_build_common
    uses: docker/build-push-action@v2
    with:
      context: common/
      file: common/Dockerfile
      push: true
      tags: ${{ secrets.DOCKER_HUB_USERNAME }}/common-package:latest
...

# image1/Dockerfile1
FROM python:3.8-slim-buster
RUN mkdir common
COPY --from=<username>/common-package:latest *.py common/
...
  5. (Overkill option, at least for this level of project) Export the common code as a standalone published package, and depend on it via standard code-dependency mechanisms (e.g. requirements.txt/pip for Python) - roughly sketched below.
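
For illustration, a rough sketch of what option 5 might look like, assuming the common code is packaged and published under a hypothetical name my-project-common (the package name and version are placeholders, not something that exists):

# image1/requirements.txt
my-project-common==0.1.0    # hypothetical published package containing common1.py/common2.py
...

# image1/Dockerfile1
FROM python:3.8-slim-buster
COPY requirements.txt .
RUN pip install -r requirements.txt    # pulls the shared code as an ordinary dependency
COPY . .
...

The upside is that each image's build context stays self-contained; the downside is the overhead of versioning and publishing the common package on every change.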

What would be your preferred method? Is there an approach that I'm missing?

scubbo
  • You could put any shared files and directories in volumes and map those volumes into both containers; that way they act as a shared filesystem, provided you map the right folder paths. https://docs.docker.com/storage/volumes/ – F.Igor Jul 21 '21 at 04:22
  • Maybe you could package the common code in a pip package and then `pip install` it in both projects that depend on it. – Hans Kilian Jul 21 '21 at 07:31
  • Hans, that would be option 5, I think, if you're referring to installing from some external reference. If you're talking about `pip install`ing from local files, then we're back to the apparent problem that `COPY` can't reference files outside the context. – scubbo Jul 21 '21 at 15:49
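
A rough sketch of the volume-based approach from the first comment above, assuming both services bind-mount the same host directory (the service names and container paths are illustrative only):

# docker-compose.yml (fragment)
services:
  app1:
    build: image1/
    volumes:
      - ./common:/app/common:ro    # same host folder mounted into both containers
  app2:
    build: image2/
    volumes:
      - ./common:/app/common:ro

Note that this shares the code at container run time rather than baking it into the images, so the images are no longer self-contained.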

1 Answer


I guess option 2 is what you want; you only ruled it out because of the restriction you mentioned:

The path must be inside the context of the build

If that is the only blocker, you can in fact switch the build context so that it is not the folder where the Dockerfile lives. Something like the following:

$ tree
.
├── common
│   └── common.py
├── docker-compose.yaml
├── image1
│   └── Dockerfile
└── image2
    └── Dockerfile

3 directories, 4 files

image1/Dockerfile:

FROM python:3
COPY common common
RUN ls common

image2/Dockerfile:

FROM python:alpine
COPY common common
RUN ls common

docker-compose.yaml:

version: '3'
services:
  app1:
    build:
      context: .
      dockerfile: image1/Dockerfile
    tty: true
    stdin_open: true
  app2:
    build:
      context: .
      dockerfile: image2/Dockerfile
    tty: true
    stdin_open: true

Then, execute it:

$ ls
common  docker-compose.yaml  image1  image2
$ docker-compose build --no-cache
Building app1
Step 1/3 : FROM python:3
 ---> 5b3b4504ff1f
Step 2/3 : COPY common common
 ---> 17274c6dfa45
Step 3/3 : RUN ls common
 ---> Running in d9f4b326e0b7
common.py
Removing intermediate container d9f4b326e0b7
 ---> af605b7b3e1e
Successfully built af605b7b3e1e
Successfully tagged 20210721_app1:latest
Building app2
Step 1/3 : FROM python:alpine
 ---> 56302acacaa7
Step 2/3 : COPY common common
 ---> cde0c866beff
Step 3/3 : RUN ls common
 ---> Running in 7b7264d8ab9e
common.py
Removing intermediate container 7b7264d8ab9e
 ---> 2835fe4d9c0f
Successfully built 2835fe4d9c0f
Successfully tagged 20210721_app2:latest

You can now see common.py in both Docker images, while each Dockerfile still lives in its own subfolder.

Additionally, if you use docker build directly, the above is equivalent to:

$ ls
common  docker-compose.yaml  image1  image2
$ docker build -t abc:1 . -f image1/Dockerfile --no-cache
Sending build context to Docker daemon  6.144kB
Step 1/3 : FROM python:3
 ---> 5b3b4504ff1f
Step 2/3 : COPY common common
 ---> 4641d7ca2a98
Step 3/3 : RUN ls common
 ---> Running in 9173c56335c9
common.py
Removing intermediate container 9173c56335c9
 ---> 83ff4c9737c2
Successfully built 83ff4c9737c2
Successfully tagged abc:1

Here you run docker build in the project_package folder and specify the context as ., so your Dockerfile can definitely find common/. The trick is that you can use -f to specify the path to the Dockerfile; in other words, the build context and the Dockerfile do not need to be in the same folder.
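
For completeness, a sketch of what image1's Dockerfile might look like once it also copies the service's own code - assuming the root of project_package/ as the build context, with the WORKDIR, paths and entrypoint module as illustrative placeholders:

# image1/Dockerfile (built with context ., i.e. project_package/)
FROM python:3
WORKDIR /app
COPY common/ common/      # shared code, reachable because the context is the project root
COPY image1/ image1/      # this service's own code
CMD ["python", "-m", "image1.image1_1"]    # illustrative entrypoint

The same pattern works for image2; only the copied directory and the entrypoint change.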

atline
  • Also see the canonical question [How to include files outside of Docker's build context?](https://stackoverflow.com/questions/27068596/how-to-include-files-outside-of-dockers-build-context). – David Maze Jul 21 '21 at 11:23
  • Hmm - thanks, atline! That certainly works. I'm surprised to see that this is the canonical answer, since I would have assumed that it was convention for Dockerfiles to implicitly expect their context to be the directory that directly contains the Dockerfile. I'm very new to Docker and still learning the conventions, though - thanks for the guidance! – scubbo Jul 21 '21 at 15:55