
I have a multi-container Django app. One container is the database, another one is the main webapp with Django installed, handling the front- and backend. I want to add a third container which provides the main functionality/tool we want to offer via the webapp. It has some complex dependencies, which is why I would like to have it as a separate container as well. Its functionality is wrapped as a CLI tool, and currently we build the image and run it as needed, passing the arguments for the CLI tool.

Currently, this is the docker-compose.yml file:

version: '3'

services:

  db:
    image: mysql:8.0.30
    environment:
      - MYSQL_DATABASE=${MYSQL_DATABASE}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
      - TZ=${TZ}
    volumes:
      - db:/var/lib/mysql
      - db-logs:/var/log/mysql
    networks:
      - net
    restart: unless-stopped
    command: --default-authentication-plugin=mysql_native_password

  app:
    build:
      context: .
      dockerfile: ./Dockerfile.webapp
    environment:
      - MYSQL_NAME=${MYSQL_DATABASE}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
    ports:
      - "8000:8000"
    networks:
      - net
    volumes:
      - ./app/webapp:/app
      - data:/data
    depends_on:
      - db
    restart: unless-stopped
    command: >
      sh -c "python manage.py runserver 0.0.0.0:8000"

  tool:
    build:
      context: .
      dockerfile: ./Dockerfile.tool
    volumes:
      - data:/data

networks:
  net:
    driver: bridge

volumes:
  db:
  db-logs:
  data:

In the end, the user should be able to set the parameters via the web UI and run the tool container. Multiple processes should be managed by a job scheduler. I hoped that running a container from within a multi-container app would be straightforward, but as far as I know it is only possible by mounting the Docker socket, which should be avoided because of security issues.

So my question is: what are the possibilities to achieve my desired goal? Things I considered:

  • Multi-stage build: its main purpose is to reduce image size, but is there a hack to use the CLI tool along with its built environment in the final image of a multi-stage build?
  • API: build an API for the tool, so the other containers can communicate with it via the Docker network (a rough sketch of what I mean is below this list). It seems cumbersome, though.
  • The service "app" (the main Django app) is built on top of the official Python image, which I would like to keep. Nevertheless, there is the possibility to build one large image based on Ubuntu that includes the tool along with its dependencies and the main Django app. This will probably increase the image size considerably and may lead to dependency issues.
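
To make the API idea more concrete, this is roughly what I imagine running inside the tool container; the endpoint name, the tool name, and its flags are only placeholders, not our actual interface:

# Sketch of a minimal HTTP wrapper around the CLI tool. Flask would be an
# extra dependency in the tool image; "the-tool" and its flags are placeholders.
import subprocess

from flask import Flask, jsonify, request

api = Flask(__name__)

@api.route("/run", methods=["POST"])
def run_tool():
    params = request.get_json()
    # Invoke the CLI tool with the parameters sent by the webapp container.
    proc = subprocess.run(
        ["the-tool", "--input", params["input"], "--output", params["output"]],
        capture_output=True,
        text=True,
    )
    return jsonify({
        "returncode": proc.returncode,
        "stdout": proc.stdout,
        "stderr": proc.stderr,
    })

if __name__ == "__main__":
    # Listen on all interfaces so the "app" container can reach it over the Docker network.
    api.run(host="0.0.0.0", port=5000)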

Has anybody run into similar issues? Which direction would you point me to? I'm also looking for some buzzwords to speed up my research.

Wengo
  • Does this answer your question? [How to execute command from one docker container to another](https://stackoverflow.com/questions/59035543/how-to-execute-command-from-one-docker-container-to-another) – Abdul Aziz Barkat Aug 11 '22 at 12:22
  • The option with SSH is definitely a way I want to try. Thanks! I'm still open to other suggestions. – Wengo Aug 11 '22 at 12:52

1 Answer


You should build both parts into a single unified image, and then you can use the Python subprocess module as normal to invoke the tool.
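
For illustration, the invocation from Django could look roughly like the sketch below. The tool name matches the /usr/local/bin/the-tool path installed in the Dockerfile further down, but the flags, the view, and the form field are placeholders rather than a known interface.

# views.py (sketch): invoking the bundled CLI tool from a Django view.
# "the-tool" matches the path installed in the Dockerfile below; the flags
# and the "input_path" form field are placeholders.
import subprocess

from django.http import JsonResponse

def run_tool(request):
    input_path = request.POST["input_path"]
    proc = subprocess.run(
        ["the-tool", "--input", input_path, "--output", "/data/result"],
        capture_output=True,
        text=True,
    )
    return JsonResponse({
        "returncode": proc.returncode,
        "stdout": proc.stdout,
        "stderr": proc.stderr,
    })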

The standard Docker Hub python image is already built on Debian, which is very closely related to Ubuntu. So you should be able to do something like

FROM python:3.10

# Install OS-level dependencies for both the main application and
# the support tool
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive \
    apt-get install --no-install-recommends --assume-yes \
      another-dependency \
      some-dependency \
      third-dependency

# Install the support tool
ADD http://repository.example.com/the-tool/the-tool /usr/local/bin/the-tool
RUN chmod +x /usr/local/bin/the-tool

# Copy and install Python-level dependencies
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt

# Copy in the main application
COPY ./ ./

# Metadata on how to run the application
EXPOSE 8000
# USER someuser
CMD ["./the-app.py"]

You've already noted the key challenges of having the tool in a separate container. You can't normally "run commands" in a container: a container is a wrapper around a single process, and manipulating it in any way (including with the docker exec debugging tool) requires unrestricted root-level access to the host. You'd need the same unrestricted root-level access to launch a temporary container per request.

Putting some sort of API or job queue around the tool would be the "most Dockery" way to do it, but that can also be significant development effort. In this setup as you've described it, the support tool is mostly an implementation detail of the main process, so you're not really breaking the "one container does one thing" rule by making it available for a normal Unix subprocess invocation inside the same container.
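
If you do end up needing the job-scheduler part, a common pattern is to run a task-queue worker (for example Celery) from the same unified image, so the task can still call the tool as a subprocess. The broker URL, task signature, and arguments in this sketch are assumptions, not part of your current setup:

# tasks.py (sketch): a Celery task wrapping the CLI tool. The worker would run
# from the same unified image (e.g. "celery -A tasks worker"), with a broker
# such as Redis added as another compose service; the broker URL and the
# task arguments here are placeholder values.
import subprocess

from celery import Celery

celery_app = Celery("tasks", broker="redis://redis:6379/0")

@celery_app.task
def run_tool(input_path, output_path):
    proc = subprocess.run(
        ["the-tool", "--input", input_path, "--output", output_path],
        capture_output=True,
        text=True,
    )
    return {"returncode": proc.returncode, "stdout": proc.stdout, "stderr": proc.stderr}

# From a Django view you would enqueue the job asynchronously, e.g.:
#   run_tool.delay("/data/in.csv", "/data/out.csv")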

David Maze