
I have all my websites' code under /srv in my containers.

My Dockerfile downloads the code using git and makes it part of the image for easier deployment to production.

But then how do I edit the code in development? I thought volumes were the solution, e.g. -v /docker/mycontainer/srv:/srv. But that overwrites the directory in the container. The first time I run it, it empties the directory because there's nothing on the host, so whatever the Dockerfile put there gets lost.

There are also directories and files inside /srv/myapp that I want shared across the different versions of my app, e.g. /srv/myapp/user-uploads. This is a common practice in professional web development.

So what can I do to achieve all of the following?

  • edit code in /srv in development
  • share /srv/myapp/user-uploads across different versions
  • let the Dockerfile download the code. Doing "git clone" or "git pull" outside of Docker would defeat Docker's purpose, in my opinion. Besides, there are things that I can't run on the host, like the database migrations or other app-specific scripts.

Is there a way to do a reverse volume mount? I mean make the container overwrite the host, instead of the opposite.

I'm thinking one solution might be to copy /srv to /srv.deployment-copy before running the container's daemon, and then, when the daemon starts, check whether /srv.deployment-copy exists and copy everything back to /srv. That way I can use /srv as a volume and still deploy code to it with the Dockerfile. I'm already using aliases for all the docker commands, so automating this won't be a problem. What do you think?
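The restore step described above can be sketched as a small POSIX shell function that a start script would call before exec-ing the daemon (the function name and arguments are mine, not from the question):

```shell
#!/bin/sh
# restore_deploy: copy the code the Dockerfile staged in $1 into the
# volume at $2, then remove the staging copy so this runs only once.
restore_deploy() {
    src=$1
    dest=$2
    if [ -d "$src" ]; then
        cp -a "$src/." "$dest/"   # the /. suffix copies dotfiles too
        rm -rf "$src"             # mark the restore as done
    fi
}

# In the container this would be:
#   restore_deploy /srv.deployment-copy /srv
# followed by exec-ing the real daemon.
```

On a second start the staging directory is gone, so the volume (with any edits made from the host) is left untouched.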

ChocoDeveloper
  • Storing user data in the web app folder isn't common practice at all. It complicates everything for no good reason. –  Mar 14 '16 at 14:16
  • Yes, doing git clone and git pull outside of Docker is totally normal; that is how I do it. The container is just that: a container. The app code changes and is kept in a separate repo. Migrations and other app-specific commands can easily be run using the exec command, which allows you to run commands in a running container. – Kevin Jun 01 '16 at 18:22

5 Answers


I found the best way to edit code in development is to install everything as usual (including cloning your app's repository), but move all the code in the container to, say, /srv/myapp.deploy.dev. Then start the container with a read-write volume for /srv/myapp and an init.d script that cleans that volume and copies the new contents in, like this:

rm -rf /srv/myapp/*            # clear the volume (old code from a previous run)
rm -rf /srv/myapp/.[!.]*       # dotfiles too; -f so an empty volume doesn't error
cp -a /srv/myapp.deploy.dev/. /srv/myapp   # copy the freshly deployed code in
rm -r /srv/myapp.deploy.dev
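As for what goes where: roughly, the Dockerfile stages the code at build time and the startup script restores it at container start. A minimal sketch of the Dockerfile side, with the repository URL and script name as placeholders:

```dockerfile
# Build time: clone the app into the staging directory, outside the volume
RUN git clone <your-repo-url> /srv/myapp.deploy.dev

# Ship the startup script that empties the /srv/myapp volume and
# copies the staged code into it (the script shown above)
COPY init-deploy.sh /etc/init.d/deploy
RUN chmod +x /etc/init.d/deploy
```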
ChocoDeveloper
  • I'm stumbling onto the same problem. It looks like you've solved it, but I'm not clear on what you're doing. Could you elaborate your answer with what's being done in your Dockerfile (at image creation time) vs what's being done in some startup scripts (at container creation time)? – user779159 Jul 30 '14 at 18:58
  • @user779159 Sorry, I've been away from the site for months. What I'm doing here is moving the code somewhere else before creating the volume. Then I create the volume (empty if it's the first run, or holding old code if it's not), delete everything in it (in case it contains old code), and move the new code into it. Now my new code sits in a volume, so I can edit it from outside. – ChocoDeveloper Dec 02 '14 at 01:28

There is another way: start the container with a volume taken from another container.

Look at https://docs.docker.com/userguide/dockervolumes/
Creating and mounting a Data Volume Container

If you have some persistent data that you want to share between containers, or want to use from non-persistent containers, it's best to create a named Data Volume Container, and then to mount the data from it.

Let's create a new named container with a volume to share.

$ sudo docker run -d -v /dbdata --name dbdata training/postgres echo Data-only container for postgres

You can then use the --volumes-from flag to mount the /dbdata volume in another container.

$ sudo docker run -d --volumes-from dbdata --name db1 training/postgres

And another:

$ sudo docker run -d --volumes-from dbdata --name db2 training/postgres

Another useful function we can perform with volumes is to use them for backups, restores, or migrations. We do this by using the --volumes-from flag to create a new container that mounts that volume, like so:

$ sudo docker run --volumes-from dbdata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata

=============

I think you should not mount a host directory into the container. Instead, use volumes with all their power: you can edit the files in a volume from other containers equipped with your preferred editors and tools, and the container running your app stays clean, with no overhead.

The structure is (image names below are placeholders):

  • a container for app data: docker run -v /data --name data ubuntu true
  • a container for the app binaries: docker run -d --volumes-from data --name app1 myapp-image
  • a container with editors and utilities for development: docker run -it --volumes-from data --name editor ubuntu bash
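With this layout, editing the shared files just means attaching an interactive shell to a disposable container that mounts the same volume (the image name is a placeholder):

```shell
$ sudo docker run -it --rm --volumes-from data ubuntu bash
# inside the container, the shared files are under /data
```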

Mikl

Note: you cannot mount a container directory to a host directory with -v.

I don't think you need to juggle /srv and /srv.deployment-copy. I think that:

  • You should use a volume for persistent/shared data: -v /hostdir/user-uploads:/srv/myapp/user-uploads, or you can use the data volume container concept. You can think of it as a filesystem-backed database that lives on the host (or in a data-only container) and that your container is allowed to use via -v.

  • You are correct about production deployment: build the image with the source code baked in (git clone), one image per release. There should be no need to edit the source code in production.

  • For the development environment, build the image without the source code, or shadow the source code directory with a volume if you use the same image for deployment and development. Then git clone the source code locally and use a volume, -v /hostdir/project/src:/srv/project, to share it with the container. Preferably share the source code read-only (append :ro) and store any temporary or intermediate files somewhere else in the container. I have setup scripts (data migration, rebuilding index/cache files, etc.) executed at container start, before the service starts. So whenever I feel I need a fresh re-init, I just kill the dev container and run it again. Or I don't stop the old container at all and just run another one.
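Putting those points together, a development run might look like this (host paths and the image name are placeholders):

```shell
$ git clone <repo> /hostdir/project
$ sudo docker run -d \
    -v /hostdir/project/src:/srv/project:ro \
    -v /hostdir/user-uploads:/srv/myapp/user-uploads \
    --name myapp-dev myapp-image
```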

Jiri

I found a nice way of doing this using just git:

CONTAINER=my_container
SYNC_REPO=/tmp/my.git
CODE=/var/www

#create bare repo in container
docker exec $CONTAINER git init --bare $SYNC_REPO

#add executable syncing hook that checks out into code dir in container
printf "#!/bin/sh\nGIT_WORK_TREE=$CODE git checkout -f\n" | \
docker exec -i $CONTAINER bash -c "tee $SYNC_REPO/hooks/post-receive;chmod +x \$_"

#use git-remote-helper to use docker exec instead of ssh for git
git remote add docker "ext::docker exec -i $CONTAINER sh -c %S% $SYNC_REPO"

#push updated local code into docker
git push docker master

This assumes you have a local git repository with the code, and git needs to be installed in the container. Alternatively, you could probably use docker run and a data container with a shared volume, with git installed there.

till

Assuming git is not the entrypoint of the container, and git is installed in it, you can ssh into the container and run git clone or git pull there. Because the volume is shared with the host, changes the container makes to the files appear on the host as well (they really are the same files).

Here is some explanation of how to quickly ssh into a container.
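As a comment on the question points out, you can also skip ssh entirely and run the pull through docker exec (the container name and path are placeholders):

```shell
$ sudo docker exec -it my_container git -C /srv/myapp pull
```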

Christopher Louden
  • I can ssh into the container, but doing this (git pulling *after* creating the container) defeats Docker's purpose (or at least half of it) in my opinion. I would like to deploy the image as a self-contained package; git pulling after deploying the image makes no sense. I would even lose the reproducibility, immutability, easy rollback, etc. Git pull is the kind of thing that can fail for random reasons, which using Docker was supposed to fix. – ChocoDeveloper Apr 03 '14 at 17:54
  • I agree 100%. I don't have further insight. – Christopher Louden Apr 03 '14 at 19:29
  • I posted my answer, you might find it helpful. – ChocoDeveloper May 30 '14 at 12:35