I have a few containers that are expected to run together. Do I need to create a tiny GitHub repository for each (which seems wasteful and inconvenient), or can I use the same GitHub repository as the source for an automated build of multiple containers?
3 Answers
If the containers are expected to be run together, then use Docker Compose to build and run them. The Dockerfile associated with each container can then be kept in a subdirectory.
Example
├── docker-compose.yml
├── one
│ └── Dockerfile
└── two
└── Dockerfile
docker-compose.yml
web1:
  build: one
  ports:
    - 8080
web2:
  build: two
  ports:
    - 8080
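Note that the file above uses the original (v1) Compose file format. As a sketch, the same layout in the newer versioned format (service names and directory layout carried over from the example above) would look like:

services:
  web1:
    build: ./one   # build context is the "one" subdirectory
    ports:
      - "8080"     # publish container port 8080 on an ephemeral host port
  web2:
    build: ./two
    ports:
      - "8080"

Running docker compose up --build from the directory containing docker-compose.yml then builds both images and starts both containers.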

- Is this docker-compose.yml interpreted by Docker Hub automated builds? – Avi Kivity Oct 29 '15 at 13:35
- @AviKivity Wouldn't this depend on your build server's workflow? Sorry, I haven't considered this use case. – Mark O'Connor Oct 29 '15 at 13:55
- I'm using Docker Hub's [automated builds](https://docs.docker.com/docker-hub/builds/). – Avi Kivity Oct 30 '15 at 14:15
- What if the containers are not expected to be run together but are meant to be run individually, separate from each other? – user5359531 Mar 13 '18 at 00:31
- @user5359531 The question stated the containers are expected to be run together. I'm a little unclear; do you have an example that outlines your alternative configuration? – Mark O'Connor Apr 06 '18 at 16:02
- Using multiple containers sequentially, independent of each other, is a standard use case for data analysis workflows, though I guess it's not covered by the original question. There is a repo designed around it [here](https://github.com/NYU-Molecular-Pathology/NGS580-nf/tree/master/containers). I ended up just using tags for each container on the Docker Hub listing [here](https://hub.docker.com/r/stevekm/ngs580-nf/tags/) – user5359531 Apr 09 '18 at 17:17
As per my comment, I needed to set up "automated builds for multiple containers". Unlike the original poster, my use case involved containers that did not need to be used together, but instead were meant to be used independently of each other.
The solution I settled on was to set up a subdirectory for each container in the project, and then configure automated builds on Docker Hub to track each subdirectory's Dockerfile with a different tag. So my git repo (hosted on GitHub) contains something like this:
$ tree containers/
containers/
|-- Makefile
|-- R-3.2.3
| |-- Dockerfile
| `-- install.R
|-- README.md
|-- base
| `-- Dockerfile
|-- bedtools-2.26.0
| `-- Dockerfile
|-- bwa-0.7.17
| `-- Dockerfile
And my Docker Hub account tracks the entire repo and builds images with tags such as
username/project:R-3.2.3
username/project:bedtools-2.26.0
username/project:bwa-0.7.17
Caveats:
- All containers are tracked on the master branch of the repo, so any change to that branch triggers rebuilds for all containers.
- The containers are built in a random order, so if you have a hierarchy of containers (e.g. a custom base layer with intermediary layers), there is a good chance that changes will not successfully propagate through the whole chain of images, requiring manual rebuilds triggered from the Docker Hub website.
Also worth noting that Docker Hub builds are extremely slow, so if you need to push out changes quickly this might not be a viable method.
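Since each tag is just built from a subdirectory, one workaround for the slow builds is to build and push the tags locally. A rough sketch (the image name and directory list here mirror the tree above; they are not the actual Makefile contents):

# build and push each subdirectory as its own tag
for d in base R-3.2.3 bedtools-2.26.0 bwa-0.7.17; do
    docker build -t username/project:$d $d
    docker push username/project:$d
done

Listing base first also means the base image is rebuilt before any images that depend on it, which sidesteps the random build-order caveat above.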

You can use the same repo with a different branch per container, in order to isolate each container's commit history.
Each one can be composed of a Dockerfile, and possibly other resources that the Dockerfile needs to COPY or ADD to the built image.
Since Git 2.5, you can clone that repo once and check out its branches in different folders. See "Multiple working directories with Git?".
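For example, assuming one branch per container (the repository URL and branch names here are hypothetical):

git clone https://github.com/user/project.git
cd project
# check out each container's branch into its own working directory
git worktree add ../project-web1 web1
git worktree add ../project-web2 web2

Each working directory then holds the Dockerfile and resources for one container, while all of them share a single underlying .git object store.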