
Software used:

  • Django 2.2.7
  • Docker 19.03.4 community
  • OS = Ubuntu 18.04

I come from a Vagrant background, where I previously set up an environment that was basically a virtualenv lookalike, but inside a virtual machine. That is to say, I booted the Vagrant machine and installed all the required packages and requirements via provisioning. On my host machine I had Eclipse installed, with the Django project located there. The last step was to bind the two together using shared (synced) folders. This way the code was always up to date.

For those unfamiliar with Django: the runserver command by default watches for code changes and "restarts" (not really) on every change, immediately reflecting that change in the browser.

Inside the virtual machine I would then run the runserver command and develop that way.
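
For reference, that is just the standard Django development-server invocation, roughly as I run it inside the VM (the bind address and port below match what the Dockerfile further down uses):

$ python manage.py runserver 0.0.0.0:8000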

Now, using Docker (I have to tie some software together and ultimately deploy using Docker), I am trying to replicate that situation. I got my Django container up and running using a simple Dockerfile based on python:3.7.5-stretch. The container runs fine, as does Django itself, however the code is now baked in (for lack of a better word). This is of course due to the ADD (copy) command inside the file:

FROM python:3.7.5-stretch

# Install dependencies first so this layer is cached between builds
ADD requirements.txt requirements.txt
RUN pip install -r requirements.txt

RUN mkdir /app
WORKDIR /app

# Bake the project code into the image
ADD . /app/

ENV PYTHONUNBUFFERED 1
ENV LANG C.UTF-8
ENV DEBIAN_FRONTEND=noninteractive

# install environment dependencies
RUN pip3 install --upgrade pip

EXPOSE 8000
EXPOSE 8280

# Note: only the last CMD takes effect, so only runserver is actually executed
CMD ["python", "manage.py", "makemigrations"]
CMD ["python", "manage.py", "migrate"]
CMD ["python", "/app/manage.py", "runserver", "0.0.0.0:8000"]

This means that in order to pick up every code change I need to rebuild the image and rerun the server. Given that I do this multiple times per minute, that seems like massive overkill.

I know about volumes, but it seems they are mounted at the container level, not at the image level. Effectively this would mean that I would have to run all the commands manually after initialization, but the container will already have stopped by then.

Long story short: what can I do to emulate the Vagrant-like situation, where code is automatically read by (or pushed into) the container, if that is at all possible?

TL;DR: I want a setup where I can update Django code in Docker in real time, without continuously rebuilding the image. Any options?

JustLudo
    Try using the `--mount` option as described in [this answer](https://stackoverflow.com/questions/23439126/how-to-mount-a-host-directory-in-a-docker-container). – ikkuh Nov 07 '19 at 14:57
  • @ikkuh: This seems to do the trick. I had to create a VOLUME in the Dockerfile, which I could then mount on docker run. Then I created the named volume, symlinked the _data folder with my own development repository and ran the server. A bit convoluted, but it seems to work. Can you make your remark into an answer so I can mark it as correct? – JustLudo Nov 07 '19 at 15:43

1 Answer


My link in the comment was wrong. I meant to link this answer. The solution given there might be a bit easier than your approach with volumes.

If you have an image and start it with roughly the following command:

$ docker run --rm -it <image_name>

Using the --mount option you can mount the current directory to your /app folder in your container as follows:

$ docker run --rm -it --mount src="$(pwd)",target=/app,type=bind <image_name>

File changes should now restart your Django server in the container.
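
For completeness, a full cycle with the Dockerfile from the question could then look roughly like this (django-dev is only a placeholder image tag; -p 8000:8000 publishes the port runserver listens on so the site is reachable from the host browser):

$ docker build -t django-dev .
$ docker run --rm -it -p 8000:8000 --mount src="$(pwd)",target=/app,type=bind django-dev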

ikkuh
  • Marked this as the correct answer because it works flawlessly. I am able to use this same technique for other software as well, which is useful. – JustLudo Nov 08 '19 at 09:46
  • There is no support for this on Windows yet; please correct me if I am wrong. A lot of issues have been raised around this. – JimShapedCoding Jun 11 '21 at 17:11