
I have a Docker 2.0/Python 3.7 application which I load into a Docker container, along with its accompanying web and database images (below is the docker-compose.yml file) ...

version: '3'

services:
  mysql:
    restart: always
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: 'maps_data'
      # So you don't have to use root, but you can if you like
      MYSQL_USER: 'chicommons'
      # You can use whatever password you like
      MYSQL_PASSWORD: 'password'
      # Password for root access
      MYSQL_ROOT_PASSWORD: 'password'
    ports:
      - "3406:3406"
    volumes:
      - my-db:/var/lib/mysql

  web:
    restart: always
    build: ./web
    ports:           # to access the container from outside
      - "8000:8000"
    env_file: .env
    environment:
      DEBUG: 'true'
    command: /usr/local/bin/gunicorn maps.wsgi:application -w 2 -b :8000
    depends_on:
      - mysql

  apache:
    restart: always
    build: ./apache/
    ports:
      - "9090:80"
    links:
      - web:web

volumes:
  my-db:

Here is the web/Dockerfile that controls the Django portion of the stack ...

FROM python:3.7-slim

RUN apt-get update \
    && apt-get install -y --no-install-recommends gcc libmariadb-dev-compat libmariadb-dev \
    && rm -rf /var/lib/apt/lists/*

RUN python -m pip install --upgrade pip
RUN mkdir -p /app/

WORKDIR /app/

COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt

COPY entrypoint.sh /app/
COPY . /app/
RUN ["chmod", "+x", "/app/entrypoint.sh"]

ENTRYPOINT ["/app/entrypoint.sh"]

My question is, is there a way I can configure things such that when I make a local change to a Python file, the change is immediately reflected in my running Docker instance? Right now, if I make a change, I have to run

docker-compose down --rmi all
docker-compose up

As you can imagine, this is a lengthy process, especially if just changing one file.

Dave
  • I'd recommend an ordinary Python virtual environment and not using Docker here. You can launch the database in a container and that's helpful, but in this case you actively don't want the filesystem isolation that Docker brings. – David Maze Mar 05 '20 at 01:03
  • @DavidMaze, do you mean having my docker container with only MySql and Apache, and then launching the Django/python server normally on my local machine? – Dave Mar 05 '20 at 02:01
  • Probably just the MySQL container, even, but yes. – David Maze Mar 05 '20 at 11:17
  • Gotcha. I'm going to continue to pursue the docker option, so I'm going to leave this question open, but if there is no solution for that, I'll go with what you suggest. – Dave Mar 05 '20 at 15:07
  • Do you need to update your application in a production environment when a push is performed on your Python code? – JRichardsz Mar 12 '20 at 19:12
  • @JRichardsz, no, don't worry about production. As long as this works the way I want locally, all is well. – Dave Mar 14 '20 at 16:56

4 Answers


Mount your local source tree into the container as a volume, and set up the web server inside the container to live-reload on file changes; with the gunicorn server that is the --reload parameter.
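For example (a sketch only; the build path and port are taken from the compose file in the question, and /app is assumed from the Dockerfile's WORKDIR), the web service could look like this:

```yaml
  web:
    restart: always
    build: ./web
    command: /usr/local/bin/gunicorn maps.wsgi:application -w 2 -b :8000 --reload
    volumes:
      - ./web:/app   # mount the local sources over the image's /app
```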

Andriy Ivaneyko
  • I don't suppose you would be able to list the changes I would need to make to my docker files to achieve what you're saying? – Dave Mar 05 '20 at 02:00

You can create a volume to share a folder between your machine and a docker container. The configuration below creates a bind mount from a folder called volumes/web, at the same level as your docker-compose.yml, to /opt/app inside the container.

With this, you can work locally and have the changes reflected in your docker container. Depending on your entrypoint, you may be able to stop and relaunch your application without restarting the container.

In gunicorn you can use the --reload option to reload your server automatically when there are changes.

  web:
    restart: always
    build: ./web
    ports:           # to access the container from outside
      - "8000:8000"
    env_file: .env
    environment:
      DEBUG: 'true'
    command: /usr/local/bin/gunicorn maps.wsgi:application -w 2 -b :8000
    depends_on:
      - mysql
    volumes:
      - ./volumes/web:/opt/app
Ángel Igualada
  • Did you mean changing the "command" in my "web" section to "command: /usr/local/bin/gunicorn maps.wsgi:application --reload -w 2 -b :8000"? Your answer doesn't specify '--reload' within the command. Anyway, I tried adding that in, but changes to Python files on my local file system are not reflected immediately in my docker container. – Dave Mar 05 '20 at 01:59

You can achieve this by changing the mount path: mount the volume directly into the container, and on any change the Django server can reload the code and immediately apply all changes.

Docker compose file :

version: '3'

services:
  mysql:
    restart: always
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: 'maps_data'
      # So you don't have to use root, but you can if you like
      MYSQL_USER: 'chicommons'
      # You can use whatever password you like
      MYSQL_PASSWORD: 'password'
      # Password for root access
      MYSQL_ROOT_PASSWORD: 'password'
    ports:
      - "3406:3406"
    volumes:
      - my-db:/var/lib/mysql

  web:
    restart: always
    build: ./web
    ports:           # to access the container from outside
      - "8000:8000"
    env_file: .env
    environment:
      DEBUG: 'true'
    command: /usr/local/bin/gunicorn maps.wsgi:application -w 2 -b :8000 --reload
    volumes:
      - .:/app/           #Please update path here
    depends_on:
      - mysql

  apache:
    restart: always
    build: ./apache/
    ports:
      - "9090:80"
    links:
      - web:web

volumes:
  my-db:

You can also check similar questions :

Why does docker-compose build not reflect my django code changes?

Auto-reloading of code changes with Django development in Docker with Gunicorn

Harsh Manvar
  • You had a comment "change your path here" -- what am I changing this to? When I insert your lines as is and try and bring up my containers (by running "docker-compose up"), I get the error, 'ERROR: for maps_web_1 Cannot start service web: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"/app/entrypoint.sh\": stat /app/entrypoint.sh: no such file or directory": unknown' – Dave Mar 13 '20 at 20:55

I would take a different approach than the ones described here in the other answers.

Firstly, Django already provides a runserver command that starts a lightweight development server and automatically reloads Python code on each request. In most cases, this is the easiest way to run a development server with autoreloading and all of that good stuff. You can read more about runserver by following the link provided. Note: you have to make sure that the WSGI application object is specified by the WSGI_APPLICATION setting in your config file.
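For reference (the module path here is assumed from the maps.wsgi:application target in your gunicorn command, not something I can verify), that setting in maps/settings.py would be:

```python
# maps/settings.py: point Django at the WSGI application object.
# The "maps.wsgi" module path is assumed from the gunicorn command in the question.
WSGI_APPLICATION = "maps.wsgi.application"
```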

You can run Django with the runserver command simply by executing python ./manage.py runserver 0.0.0.0:8000. Before running runserver, I generally like to run a migrate command as well to avoid any errors (I have learnt this from experience).

In your docker-compose.yml, these commands would look something like this:

...
  web:
    ...
    command: >
      bash -c "python ./manage.py migrate &&
               ./manage.py runserver 0.0.0.0:8000" # Simple bash command to run migrate followed by runserver
    ...

The code above would be enough on its own, but when running runserver under docker-compose, you will generally have to wait for your database to finish initializing so Django can use it (discussed later in the comments). You can do this by creating a simple file in your web directory called wait_for_db.py with the following code:

import os
import logging
from time import time, sleep

import MySQLdb

# Cast to int: os.getenv returns a string when the variable is set.
check_timeout = int(os.getenv("DB_CHECK_TIMEOUT", 30))
check_interval = int(os.getenv("DB_CHECK_INTERVAL", 1))
interval_unit = "second" if check_interval == 1 else "seconds"
config = {
    "db": os.getenv("MYSQL_DATABASE", "maps_data"),
    "user": os.getenv("MYSQL_USER", "chicommons"),
    "passwd": os.getenv("MYSQL_PASSWORD", "password"),
    "host": os.getenv("DATABASE_URL", "mysql")
}

start_time = time()
logger = logging.getLogger()
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler())


def db_isready(db, user, passwd, host):
    while time() - start_time < check_timeout:
        try:
            # MySQLdb expects db/user/passwd/host keyword arguments
            conn = MySQLdb.connect(db=db, user=user, passwd=passwd, host=host)
            logger.info("Database is ready! ✨")
            conn.close()
            return True
        except Exception:
            logger.info(f"Database isn't ready. Waiting for {check_interval} {interval_unit}...")
            sleep(check_interval)

    logger.error(f"We could not connect to the database within {check_timeout} seconds.")
    return False


db_isready(**config)

To run the above file in your docker-compose, you will need to update the command section as below:

...
  web:
    ...
    command: >
      bash -c "python wait_for_db.py &&
               ./manage.py migrate &&
               ./manage.py runserver 0.0.0.0:8000" # Again, note that I am simply executing a python file in a simple bash command followed by the same old migrate and runserver commands
    ....

Finally, for the last step (and probably the most important one), you will need to make sure that you are making use of volumes in Docker, which others have explained well in their answers here; just for clarity, here is the code for that in your docker-compose file:

...
  web:
    ...
    volumes:
    - ./web/:/app
    # The part on the left side of the colon is the location on your computer relative to the current file, and the part on the right side is the location within the docker container.
    # It is /app because, remember, in your Dockerfile you copy everything to the /app directory. ;)
    ...

One final note: runserver should not be used in production. Generally, you do not need live-reloading in production, so your earlier gunicorn/WSGI configuration is the way to go there. You will want to keep separate docker-compose files for production and development.
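One common way to split them (the file layout here is a convention I am assuming, not something from the question) is to keep the gunicorn command in docker-compose.yml and let a docker-compose.override.yml, which docker-compose reads automatically during development, swap in runserver and the source mount:

```yaml
# docker-compose.override.yml (picked up automatically by `docker-compose up`)
version: '3'
services:
  web:
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./web:/app
```

On a production host you would run `docker-compose -f docker-compose.yml up` explicitly so the override is skipped.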

Afraz Hussain
  • This answer is actually a very simple approach if you copy all of my code and try it out first. I just explained the answer to clarify all the small doubts that you may have. If you are not sure why we need to wait for the database, it is because we need to avoid errors where your django runs before your database runs, please read the question and answer here: https://stackoverflow.com/questions/53129271/can-t-connect-to-mysql-server-on-db-django-restframework-with-mysql-in-doc Please let me know if you have any questions and I will try my best to answer them here. – Afraz Hussain Mar 13 '20 at 23:27
  • You will also need to make sure that you have `mysqlclient` (which provides the `MySQLdb` module on Python 3) in your `requirements.txt` – Afraz Hussain Mar 13 '20 at 23:31
  • Thanks for your well thought out response. Is it not possible then to create a Django volume that points to a volume on my local machine? I'm trying to eliminate the need for creating extra files (e.g. wait_for_db.py) but open to that if this is the only way. – Dave Mar 14 '20 at 16:55
  • I did give your solution a try though. It resulted in a "ERROR: for maps_web_1 Cannot start service web: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"/app/entrypoint.sh\": permission denied": unknown" error when I ran "docker-compose up". – Dave Mar 14 '20 at 18:23
  • Oh that's simple, you will just have to make sure that Docker has the permission to your system drive. If I assume you are using Docker Desktop for Windows, you can do this by going to the Settings and go to Shared Drives and selecting your drive there. If its some other OS, you can simply search on Google about that – Afraz Hussain Mar 14 '20 at 18:33
  • Also the `wait_for_db.py` is actually not a `runserver` issue but more of an issue with how `docker-compose` might start the Django server first without the database being fully initialized. You can read about the `docker-compose` startup-order here: https://docs.docker.com/compose/startup-order/ It would be recommended even in your previous setup to avoid errors like the one I linked earlier. – Afraz Hussain Mar 14 '20 at 18:36
  • Hi, I'm actually using Mac (High Sierra) – Dave Mar 14 '20 at 21:11
  • Can you try running `chmod 777 {path_to your_directory}` on your Mac and let me know if it works? For example, `chmod 777 ~/dev/app/`. Just need to change the permissions of that directory to let Docker access it. – Afraz Hussain Mar 14 '20 at 21:18
  • Let us [continue this discussion in chat](https://chat.stackoverflow.com/rooms/209646/discussion-between-afraz-hussain-and-dave). – Afraz Hussain Mar 14 '20 at 21:32