57

I have a docker-compose-staging.yml file which I am using to define a PHP application. I have defined a data volume container (app) in which my application code lives; that code is shared with the other containers using volumes_from.

docker-compose-staging.yml:

version: '2'
services:
    nginx:
        build:
            context: ./
            dockerfile: docker/staging/nginx/Dockerfile
        ports:
            - 80:80
        links:
            - php
        volumes_from:
            - app

    php:
        build:
            context: ./
            dockerfile: docker/staging/php/Dockerfile
        expose:
            - 9000
        volumes_from:
            - app

    app:
        build:
            context: ./
            dockerfile: docker/staging/app/Dockerfile
        volumes:
            - /var/www/html
        entrypoint: /bin/bash

This particular docker-compose-staging.yml is used to deploy the application to a cloud provider (DigitalOcean), and the Dockerfile for the app container has COPY commands which copy over folders from the local directory to the volume defined in the config.

docker/staging/app/Dockerfile:

FROM php:7.1-fpm
COPY ./public /var/www/html/public
COPY ./code /var/www/html/code

This works when I first build and deploy the application. The code in my public and code directories is present and correct on the remote server. I deploy using the following command:

docker-compose -f docker-compose-staging.yml up -d

Next, however, I add a file to my local public directory and run the following command to rebuild with the updated code:

docker-compose -f docker-compose-staging.yml build app

The output from this rebuild suggests that the COPY commands were successful:

Building app
Step 1 : FROM php:7.1-fpm
 ---> 6ed35665f88f
Step 2 : COPY ./public /var/www/html/public
 ---> 4df40d48e6a5
Removing intermediate container 7c0fbbb7f8b6
Step 3 : COPY ./code /var/www/html/code
 ---> 643d8745a479
Removing intermediate container cfb4f1a4f208
Successfully built 643d8745a479

I then deploy using:

docker-compose -f docker-compose-staging.yml up -d

With the following output:

Recreating docker_app_1
Recreating docker_php_1
Recreating docker_nginx_1

However, when I log into the remote containers, the file changes are not present.

I'm relatively new to Docker so I'm not sure if I've misunderstood any part of this process! Any guidance would be appreciated.

Alex H
  • Do you build on a different machine than where you are running your containers? What is the "remote server"? Where do you run the "build" command? – Andreas Jägle Jan 06 '17 at 10:27
  • I have configured my local docker-machine to use a cloud provider using this guide https://docs.docker.com/machine/get-started-cloud/. So my local running docker-machine is using the digitalocean driver, and I'm building locally. This works on first build, but further changes and builds run locally do not show changes on my remote DigitalOcean containers. – Alex H Jan 06 '17 at 10:50
  • The answer to your question is that yes, I am building on a different machine to where the containers are :) – Alex H Jan 06 '17 at 10:56
  • I am not really aware of how compose builds play together with remote hosts (docker-machine). Could it be that you are running the old version of the image because the new version is only available locally? (Just an assumption.) Any chance to run `docker images` locally and remotely? – Andreas Jägle Jan 06 '17 at 13:51
  • Docker commands such as 'docker images' and 'docker ps -a' seem to return the same whether run locally or remotely, so I think a docker-machine set to a cloud host actually runs the commands remotely, even when you run them in a local terminal. I have noticed that when I rebuild after making a change, then run 'up', it creates a new container with a temporary name based off the rebuilt image. The old container is sticking around and not exiting, which makes me think it's not being replaced correctly? – Alex H Jan 06 '17 at 14:10
  • Did you copy the files (in the app folder) to the remote system beforehand? I wouldn't assume that compose will copy your files to the machine before executing the build. Have a look at docker-machine scp (https://docs.docker.com/machine/reference/scp/), which seems to be the tool to use for provisioning the machines with data. In my opinion, building and running should be separated, connected just by using images from a repository (build, publish, pull, run). – Andreas Jägle Jan 06 '17 at 14:26
  • @AlexH Did you find any workaround for your problem? I am trying to do the same thing – Rahul Aug 08 '17 at 13:01

10 Answers

25

This is because of the build cache.

Run,

docker-compose build --no-cache

This will rebuild images without using any cache.

And then,

docker-compose -f docker-compose-staging.yml up -d
Harsh Vakharia
  • Unfortunately that didn't work. The output from the COPY commands usually signals whether the cache is being used, and it wasn't being used in my original post. I've tried specifying the no-cache option, but again the files did not update. – Alex H Jan 06 '17 at 09:33
  • @EhudKaldor Oh, I missed that! :/ – Harsh Vakharia Jan 07 '17 at 18:17
  • This worked for me; however, it is painful to do this every time you change a few CSS properties, for instance. I am probably naive, but is there any way to have a kind of watchdog that tracks these changes? FYI, I am working with golang. – Plaix Jan 13 '17 at 14:31
  • @Plaix I'm surprised that this isn't the default behaviour. – basickarl Sep 19 '18 at 08:21
  • Shouldn't the cache recognise that a file has changed and be instructed to build again? Otherwise, how would the cache ever change? – Jules Apr 23 '21 at 02:02
  • I ran into this myself and this is the only thing that worked. There is a big difference, by the way, between running `docker build --no-cache` and `docker-compose build --no-cache`: docker-compose creates a different image name/tag than plain docker, which was throwing me off. – james-see Apr 20 '22 at 04:25
  • This only worked for me after I first brought the containers down with `docker-compose down`. – fguillen Mar 13 '23 at 14:12
9

I was struggling with the fact that migrations were not being detected or run. I found this thread and noticed that the root cause was, indeed, files not being updated in the container. The force-recreate solution suggested above solved the problem for me, but I find it cumbersome to have to remember when to do it and when not; e.g. Vue-related files seem to work just fine, but Django-related files don't.

So I figured: why not adjust the Dockerfile to clean up the previous files before the copy:

RUN rm -rf path/to/your/app
COPY . path/to/your/app

Worked like a charm. Now it's part of the build, and all you need to do is run docker-compose up -d --build again. The files are up to date and you can run makemigrations and migrate against your containers.
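
For the question's setup, a rough sketch of the app Dockerfile with this cleanup step added could look like this (paths taken from the question, nothing else changed):

FROM php:7.1-fpm
# Remove stale copies left over from earlier builds before copying fresh code
RUN rm -rf /var/www/html/public /var/www/html/code
COPY ./public /var/www/html/public
COPY ./code /var/www/html/code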

TJA
5

I had the same issue because of shared volumes. For me the solution was to remove the shared volume using this command:

docker volume rm [VOLUME_ID]

You can find the volume ID or name in the "Mounts" section of the output of this command:

docker inspect [CONTAINER_ID]
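
For example, assuming the container name docker_app_1 from the question's recreate output, it would look roughly like this (the volume name placeholder is whatever shows up in the Mounts output):

# Show only the Mounts section of the container to find the volume name
docker inspect --format '{{ json .Mounts }}' docker_app_1

# Remove the stale volume so the next `up` recreates it from the new image
docker volume rm <volume_name_from_mounts>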
Bob Satikin
  • Thank you so much! This was the case for me. Here it is via compose: `docker-compose down --volumes` (https://stackoverflow.com/a/52326805/2251364) – Hritik Jul 15 '21 at 00:18
5

Just leaving this here for when I come back to this page in two weeks.

You may not want to use docker system prune -f in this block.

    docker-compose down --rmi all -v \
    && docker-compose build --no-cache \
    && docker-compose -f docker-compose-staging.yml up -d --force-recreate
Grahame
4

I had a similar, if not the same, issue while working on a .NET Core application.

What I was trying to do was rebuild my application and get it to update my Docker image, so that I could see my changes reflected in the containerized copy.

So I got going by removing the underlying image generated by docker-compose up, using this command, to get my changes reflected:

docker rmi [imageId]

I believe there should be support for this in docker-compose, but this was enough for my needs at the moment.

Esayas
  • Removing images doesn't help, unfortunately. I even removed Docker and the `/var/lib/docker` folder on Ubuntu 18.04, and I still have old content in my containers! – 4xy Nov 28 '18 at 14:33
2

None of the above solutions worked for me, but what finally worked was the following steps (a shell sketch follows the list):

  1. Copy/move the file outside of the Docker app folder

  2. Delete the file you want to update

  3. Rebuild the Docker image without the updated file

  4. Move the copied file back into the Docker app folder

  5. Rebuild the Docker image again

    Now the image will contain the updates to the file.
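
As a rough shell sketch of those steps (the file name is purely hypothetical; the compose file name is taken from the question):

# 1./2. Move the updated file out of the build context
mv ./public/updated-file.php /tmp/
# 3. Rebuild the image without the updated file
docker-compose -f docker-compose-staging.yml build app
# 4. Move the file back into the build context
mv /tmp/updated-file.php ./public/
# 5. Rebuild again, then bring the stack up
docker-compose -f docker-compose-staging.yml build app
docker-compose -f docker-compose-staging.yml up -d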

Community
1

I'm relatively new to Docker myself and found this thread after experiencing a similar issue with an updated YAML file not seeming to be copied into a rebuilt container, despite having turned off caching.

My build process differs slightly, as I use Docker Hub's GitHub integration for automating image builds when new commits are made to the master branch. The build happens on Docker's servers, rather than through the locally built and pushed image workflow.

What ended up working for me was to do a docker-compose pull to bring the most up-to-date versions of the containers defined in my .env file down into my local environment. I'm not sure whether the pull command differs from the up command with a --force-recreate flag set, but I figured I'd share anyway in case it might help someone.

I'd also note that this process allowed me to turn auto-caching back on, because the edited file was actually being detected by the Docker build process; I just wasn't seeing it because I was still running docker-compose up on outdated image versions locally.
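
For reference, the sequence I mean is roughly this (a sketch; it assumes the automated build has already pushed the new images to the registry):

# Fetch the freshly built images, then recreate the containers from them
docker-compose pull
docker-compose up -d --force-recreate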

0

I am not sure it is caching, because (a) the build output usually notes whether the cache was used or not, and (b) build should detect the changed content in your directory and invalidate the cache.

I would try to bring up the container on the same machine used to build it, to see whether it is updated or not. If it is, the changed image is not being propagated. I do not see any version used in your files (build -t XXXX:0.1 or build -t XXXX:latest), so it might be that your staging machine uses a stale image. Or are you pushing the new image so the staging server can pull it from somewhere?
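
As a sketch of what I mean (the image name and tag are only examples; the Dockerfile path is from the question):

# Build with an explicit tag, also tag it as latest, and push it somewhere the staging host can pull from
docker build -t yourrepo/app:0.1 -f docker/staging/app/Dockerfile .
docker tag yourrepo/app:0.1 yourrepo/app:latest
docker push yourrepo/app:0.1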

Ehud Kaldor
  • The newly created image does contain the changes, and has a new IMAGE ID. Furthermore, the updated container references the new IMAGE ID when I do a 'docker inspect'. However, the container does not contain the changes. Which leads me to believe that something is going on with the original /var/www/html folder having some kind of precedence over the updated version in the image? – Alex H Jan 06 '17 at 17:36
  • You mean you run the container on the same machine that built it, the image is updated, but the container is still stale? Did you try giving it an explicit version number? – Ehud Kaldor Jan 06 '17 at 17:48
  • Also, I've noticed that Docker does not tag your latest build with 'latest' automatically in some cases. After you build, try explicitly tagging: docker build -t XXXX:0.1 . and then docker tag XXXX:0.1 XXXX:latest – Ehud Kaldor Jan 06 '17 at 17:49
0

As this other answer confirms, the image gets updated correctly, but the original container is still the previously created one, and thus it is using the old COPY contents.
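
A sketch of one way around that, using the compose file name from the question: force the containers to be recreated so they start from the newly built image:

docker-compose -f docker-compose-staging.yml up -d --force-recreate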

debuti
-5

You are trying to update an existing volume with the contents of a new image; that does not work.

https://docs.docker.com/engine/tutorials/dockervolumes/#/data-volumes

States:

Changes to a data volume will not be included when you update an image.
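
If the contents should instead come from the rebuilt image, one option (a sketch; note that it discards the old volume and its data) is to remove the containers together with their volumes before bringing the stack back up:

# Remove the containers and their anonymous volumes, then recreate everything from the rebuilt images
docker-compose -f docker-compose-staging.yml down -v
docker-compose -f docker-compose-staging.yml up -d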

terbolous