
Problem I am facing:

  • I have a docker-compose setup on my server
  • I want to build docker images for my projects (I use drone.io) and store them on said server
  • I want to create containers using my private images in my compose setup

What I did:

I did not want to push my private Docker images to Docker Hub, so I decided to add a registry container to my docker-compose setup (CI pushes images there automatically). Now I would like to use such an image in the very same docker-compose file, expecting it to simply be pulled from the registry.
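For reference, the CI push step looks roughly like this (a sketch assuming drone's `plugins/docker` image; the registry address matches the compose file below):

kind: pipeline
name: default

steps:
  - name: build-and-push
    image: plugins/docker
    settings:
      registry: registry.example.com:5000
      repo: registry.example.com:5000/username/myimage
      tags: latest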

Relevant part of my docker-compose.yml file for better visualisation:

version: "3.7"

services:

  myimage:
    image: registry.example.com:5000/username/myimage
    depends_on:
      - registry
    networks:
      - traefik_public
    labels:
      # traefik labels

  registry:
    image: registry:2
    networks:
      - traefik_public
    ports:
      - '5000:5000'
    labels:
      # traefik labels giving this container the 'registry.example.com' domain

Instead, I immediately realized I broke my setup as calling:

docker-compose up -d

produced the following output:

Network traefik_public is external, skipping
Pulling myimage (registry.example.com:5000/username/myimage:)...
ERROR: Get https://registry.example.com:5000/v2/: dial tcp MY-IP:5000: connect: connection refused

That is understandable: the registry container is not running yet, so it cannot serve the image needed to create and run 'myimage'.

Adding 'depends_on' to the 'myimage' definition does not help either, as I suppose it only orders container startup and does not affect the pulling of the image, which happens first.

Workaround:

  1. port-forwarded MY-IP:5000 to machine_running_docker:5000
  2. comment out 'myimage'
  3. docker-compose up <- gets registry up and running
  4. remove comments around 'myimage'
  5. docker-compose up -d myimage
  6. profit

The above steps work, but unfortunately they are unacceptable for me, as they negate any value of having all the containers in a single file and make starting/stopping containers really inconvenient.

How should I tackle this? How and where should I store my docker images so that I am able to use them in my compose setup? Is this doable with a private registry, or not?

  • A registry is a running service. If I understand your issue, you want to pull from a non-running or inaccessible registry. That will not work. The registry must be running to pull images from it. – gview Dec 09 '20 at 16:07
  • Sounds like an [XY problem](https://meta.stackexchange.com/questions/66377/what-is-the-xy-problem). What is the problem you are actually trying to solve for which you considered this as a probable solution? – Shashank V Dec 09 '20 at 16:07
  • @gview I know, I was trying to somehow delay pulling of images until the container serving them is ready – Filip Gdovin Dec 09 '20 at 16:51
  • @ShashankV thank you, you made me rethink my original question, I edited the whole question to highlight the original issue I am trying to solve – Filip Gdovin Dec 09 '20 at 16:52
  • Are you trying to bundle the registry and your application together into a single docker-compose file? You say "CI pushes docker images there automatically" - where does the CI push to? Where are you trying to deploy your (application + registry) combo? – Shashank V Dec 09 '20 at 17:07
  • @ShashankV yes, currently I push my docker image to my registry, but that creates the issue where I cannot pull my image unless the registry container is running, so I cannot run them as a whole. The ultimate goal is just to be able to use my docker image in my docker-compose. The registry was just my shot at achieving it. – Filip Gdovin Dec 09 '20 at 17:15
  • If I understand correctly what you are trying to do, you clearly have a circular dependency. As part of your application's build pipeline, you need to push to a registry - and you want to deploy that registry along with your application. In any case, you need to stop treating the registry as part of your application. Treat the registry as an infrastructure component that can be used by multiple applications. Your application is only one of these applications. You should not couple the two. – Shashank V Dec 09 '20 at 17:20
  • If the actual problem you are trying to solve is that you need to install the application in a location where you can't pull the images from a registry that your build pipeline pushes to, you can export the images to disk using `docker save`, create a tar file of all the images, load the images on the installation premises using `docker load` and then use docker-compose. As the images are already loaded, there is no need for a registry. You can write a simple script to automate the docker load and the docker-compose command to make this a single-touch installation (see the sketch after these comments). – Shashank V Dec 09 '20 at 17:23
  • @ShashankV "In any case, you need to stop treating the registry as part of your application. Treat the registry as an infrastructure component that can be used by multiple applications. Your application is only one of these applications. You should not couple the two" - so, decoupling the registry container means moving it from my compose file to... where? Should I have two compose files, then? One for infrastructure (just registry and maybe CI for now) and another for the rest? Is it enough to "connect them" using the network? Similarly to this: https://stackoverflow.com/a/38089080/4483607 – Filip Gdovin Dec 09 '20 at 17:40
  • Yes, you can move it to a separate compose file. Although it might work, I don't recommend using the same network for both the registry and your application, as you are creating another coupling point. What if you want to deploy another application tomorrow re-using the same registry? You do not want to put all the applications in the same docker network. You can probably expose the registry on a host port and access it at `hostIP:hostPort` instead. – Shashank V Dec 09 '20 at 17:55
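For completeness, here is a minimal sketch of the `docker save`/`docker load` approach suggested in the comments (image name and tarball name are placeholders):

#!/bin/bash
# Sketch of a single-touch install via docker save / docker load.
# Image name and tarball name below are placeholders.
set -e

# On the build machine: export the application image(s) into one tarball.
docker save -o myapp-images.tar registry.example.com:5000/username/myimage:latest

# On the target machine: load the images, then bring the stack up.
# The images now exist locally, so no registry is needed at deploy time.
docker load -i myapp-images.tar
docker-compose up -d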

1 Answer


So, based on @ShashankV's comments, I put together a working solution:

I split my services into two parts, represented by two docker-compose files. The first, a so-called admin part, brings up the essential services, such as traefik, registry, portainer and so on, so that I am able to leverage these in the next part(s).

The second part runs the less important services, or services dependent on the first part. This way I am able to re-use my traefik labels, use images from my registry, and so on.

NOTE: The services from BOTH parts use the same external docker network so that I can use the traefik reverse proxy for every service.
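A rough sketch of what the two files look like (the file names are my own choice; the numeric prefixes keep the sort order in the script below correct, and only the parts relevant to the split are shown):

# docker-compose-1-admin.yml - infrastructure part
version: "3.7"

services:
  registry:
    image: registry:2
    networks:
      - traefik_public
    ports:
      - '5000:5000'

networks:
  traefik_public:
    external: true

# docker-compose-2-apps.yml - application part
version: "3.7"

services:
  myimage:
    image: registry.example.com:5000/username/myimage
    networks:
      - traefik_public

networks:
  traefik_public:
    external: true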

This is more complicated than what @ShashankV proposed, and it keeps the services from both compose files tightly coupled (same network, traefik labels). If you don't need this in your use case, you will probably be better off with separate networks for the first and second parts.

Last but not least, I wrote a small bash function which lets me bring the whole stack up or down with a single command, while respecting the order of the compose files.

#!/bin/bash
# Manages separate docker-compose files in order:
# 'up' and 'pull' walk the files in ascending order,
# 'down' in descending order (tear down dependents before infrastructure).
# Assumes file names without spaces, e.g. docker-compose-1-admin.yml.
function docker-stack() {
  local operation=$1
  local files file
  if [ "$operation" = "up" ]; then
     files=$(ls -1 docker-compose-*.yml | sort -n)
     for file in $files
     do
       docker-compose -f "$file" up -d
     done
  elif [ "$operation" = "down" ]; then
     files=$(ls -1 docker-compose-*.yml | sort -nr)
     for file in $files
     do
       docker-compose -f "$file" down
     done
  elif [ "$operation" = "pull" ]; then
     files=$(ls -1 docker-compose-*.yml | sort -n)
     for file in $files
     do
       docker-compose -f "$file" pull
     done
  else
     echo "Unknown command: $operation"
  fi
}
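Usage, once the function is sourced into the shell and run from the directory containing the compose files:

docker-stack pull
docker-stack up
docker-stack down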