288

Say I have a trivial container based on ubuntu:latest. Now there is a security update and ubuntu:latest is updated in the Docker repo.

  1. How would I know my local image and its containers are running behind?

  2. Is there some best practice for automatically updating local images and containers to follow the Docker repo updates, which in practice would give you the same niceties as unattended-upgrades running on a conventional Ubuntu machine?

hbogert
  • I'm looking for an answer to this since the beginning of docker. It's even a little more complicated. If I install apache (for instance) and that gets updated, the base image does not change, since I installed it afterwards. I still would like to have auto-updates for apache. I actually asked on IRC about this and got "follow upstream and rebuild on updates" as an answer... – Mathias Oct 20 '14 at 10:29
  • Glad I'm not the only one wondering. It seems development and reproducibility are more important to the docker devs than the sensible update mechanisms we've had for years now. – hbogert Oct 20 '14 at 11:18
  • The problem is, docker is just the technology for the containers. I think it needs some time for an ecosystem to evolve around that. There are other problems docker doesn't address, like logging. – Mathias Oct 20 '14 at 12:10
  • Thanks to everyone who answered. I'm sorry I couldn't split the bounty. Even though there was no final solution to my problem, there was good input from all of you. – Mathias Oct 28 '14 at 12:22
  • For @Mathias, the solution I just added has a script that will check for security updates for packages installed in the container post-pull. It also has a separate script for checking the base image. – Fmstrat Jun 27 '17 at 22:46
  • Rule of thumb: Don't put something in a container that you cannot generate via automation later. – Xaqron Aug 26 '17 at 11:50
  • From this blog: [Watching Images for Updates](https://anchore.com/blog/watching-images-updates/), I learned of a good Docker CI/CD product. But the [anchore-engine open source](https://github.com/anchore/anchore-engine) requires running the server on your VPS. Providing a lightweight anchore server is still a problem. – 我零0七 Apr 10 '20 at 06:12
  • @我零0七 But that will only notify you of updates right? That still means you'd have to pull the new image, rebuild the image and deploy. – hbogert Apr 10 '20 at 11:17
  • In case anyone wants a simple script that can help automate checking for image updates, [dockcheck](https://github.com/foresto/dockcheck) works well with cron. – ʇsәɹoɈ Oct 10 '20 at 02:01
  • I helped write [image-watch.com](https://image-watch.com), which is a hosted notification solution. It doesn't have hooks for rebuilding, but it does provide you with the old and new hashes, so applying them would be a text search-and-replace followed by a commit and push. – Stephen Cleary Nov 21 '21 at 01:40

16 Answers

165

We use a script which checks whether a running container was started with the latest image. We also use upstart init scripts for starting the docker image.

    #!/usr/bin/env bash
    set -e
    BASE_IMAGE="registry"
    REGISTRY="registry.hub.docker.com"
    IMAGE="$REGISTRY/$BASE_IMAGE"
    # IDs of the running containers started from the image
    CIDS=$(docker ps | grep "$IMAGE" | awk '{print $1}')
    docker pull "$IMAGE"

    for CID in $CIDS
    do
        # --type image avoids matching a container that shares the image's name
        LATEST=$(docker inspect --type image --format "{{.Id}}" "$IMAGE")
        RUNNING=$(docker inspect --format "{{.Image}}" "$CID")
        NAME=$(docker inspect --format '{{.Name}}' "$CID" | sed "s/\///g")
        echo "Latest: $LATEST"
        echo "Running: $RUNNING"
        if [ "$RUNNING" != "$LATEST" ]; then
            echo "upgrading $NAME"
            stop "docker-$NAME"     # upstart job that wraps the container
            docker rm -f "$NAME"
            start "docker-$NAME"    # upstart re-creates it from the fresh image
        else
            echo "$NAME up to date"
        fi
    done

And the upstart init script looks like:

docker run -t -i --name "$NAME" "$IMAGE" /bin/bash
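
On newer Docker versions you can get the same check without the grep/awk matching by comparing repository digests before and after a pull. A minimal sketch, assuming the image was pulled from a registry (the image name is a placeholder):

    #!/usr/bin/env bash
    # Compare the local image's repo digest before and after a pull.
    set -e
    IMAGE="myorg/myimage:latest"   # placeholder

    BEFORE=$(docker image inspect --format '{{index .RepoDigests 0}}' "$IMAGE")
    docker pull "$IMAGE" > /dev/null
    AFTER=$(docker image inspect --format '{{index .RepoDigests 0}}' "$IMAGE")

    if [ "$BEFORE" != "$AFTER" ]; then
        echo "$IMAGE changed upstream; restart its containers here"
    fi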
bsuttor
  • Thanks a lot for this valuable contribution. This seems like a good way to update the base image. The remaining question is, how do you update an application (like apache) that was installed by the distribution in the dockerfile? Or do you only use ready-made base images that only need your application code (like a website)? – Mathias Oct 25 '14 at 11:28
  • We use packer and puppet to configure our images. Our images are ready to go to production after their creation – bsuttor Oct 27 '14 at 09:05
  • @Mathias, see my edited answer I have a tiny tool [docker-run](https://github.com/iTech-Developer/docker-run) that I am using to update linux (currently debian/ubuntu) packages in all running containers. – iTech Nov 05 '14 at 20:47
  • If an image has the same name as a container (e.g. `redis`), `docker inspect --format "{{.Id}}" $IMAGE` will get the container's info. Add `--type image` to fix this. – Patrick Fisher Jun 04 '16 at 02:40
  • Thanks for your post. I modified it to wrap the whole thing inside a loop to obtain images from docker: `for IMAGE in $(docker ps --format {{.Image}} -q | sort -u)` – Armand Aug 14 '17 at 19:53
  • I know this is an old post, but how can you start something that you already removed? – z-vap Dec 15 '19 at 20:18
  • I know that this is very old, but how do you use this script? Where do you place it? – Michał Marszałek Jul 06 '21 at 22:23
  • In my case, I added this script as /opt/docker_scripts/"project_name"/update_image.sh, but you can run it from anywhere as long as you define the IMAGE variable in the script. – bsuttor Jul 07 '21 at 10:03
30

You can use Watchtower to watch for updates to the image a container is instantiated from and automatically pull the update and restart the container using the updated image. However, that doesn't solve the problem of rebuilding your own custom images when there's a change to the upstream image it's based on. You could view this as a two-part problem: (1) knowing when an upstream image has been updated, and (2) doing the actual image rebuild. (1) can be solved fairly easily, but (2) depends a lot on your local build environment/practices, so it's probably much harder to create a generalized solution for that.

If you're able to use Docker Hub's automated builds, the whole problem can be solved relatively cleanly using the repository links feature, which lets you trigger a rebuild automatically when a linked repository (probably an upstream one) is updated. You can also configure a webhook to notify you when an automated build occurs. If you want an email or SMS notification, you could connect the webhook to IFTTT Maker. I found the IFTTT user interface to be kind of confusing, but you would configure the Docker webhook to post to `https://maker.ifttt.com/trigger/docker_xyz_image_built/with/key/your_key`.

If you need to build locally, you can at least solve the problem of getting notifications when an upstream image is updated by creating a dummy repo in Docker Hub linked to your repo(s) of interest. The sole purpose of the dummy repo would be to trigger a webhook when it gets rebuilt (which implies one of its linked repos was updated). If you're able to receive this webhook, you could even use that to trigger a rebuild on your side.
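
Docker Hub automated builds also expose a build trigger URL you can POST to, which is handy if your notification arrives somewhere that can run curl. A sketch, with a placeholder token (the real URL comes from the repository's Build Triggers settings page):

    TRIGGER_URL="https://registry.hub.docker.com/u/myuser/myrepo/trigger/01234567-89ab-cdef-0123-456789abcdef/"

    curl -s -H "Content-Type: application/json" \
         --data '{"build": true}' -X POST "$TRIGGER_URL"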

jjlin
  • Watchtower uses the docker socket though. From a security perspective that is giving away root access to the host machine. – JoeG Aug 02 '17 at 16:08
  • Also, Watchtower doesn't seem to be able to update images from private repositories other than Docker Hub. A bummer for us who use Azure. – Thomas Eyde Nov 22 '18 at 08:55
  • You can use private registries via the `REPO_USER` and `REPO_PASS` environment variables. Take a look at Watchtower's readme.md for more info: https://github.com/v2tec/watchtower#usage – Alejandro Nortes Jan 21 '19 at 20:49
  • Word of warning: watchtower has been abandoned by its maintainer, and the image on DockerHub isn't even up to date with the one on GitHub. – XanderStrike Mar 06 '19 at 23:51
  • The Watchtower repo seems to have been migrated to [containrrr/watchtower](https://github.com/containrrr/watchtower). And there are some issues with linked automated builds on Dockerhub, as pointed out by [this answer on a similar question](https://stackoverflow.com/a/59327648/1781026). – chrki Apr 26 '20 at 21:41
26

A 'docker way' would be to use docker hub automated builds. The Repository Links feature will rebuild your container when an upstream container is rebuilt, and the Webhooks feature will send you a notification.

It looks like the webhooks are limited to HTTP POST calls. You'd need to set up a service to catch them, or maybe use one of the POST-to-email services out there.

I haven't looked into it, but the new Docker Universal Control Plane might have a feature for detecting updated containers and re-deploying.

CAB
12

One of the ways to do it is to drive this through your CI/CD systems. Once your parent image is built, have something that scans your git repos for images using that parent. If found, you'd send a pull request to bump to the new version of the image. The pull request, if all tests pass, would be merged and you'd have a new child image based on the updated parent. An example of a tool that takes this approach can be found here: https://engineering.salesforce.com/open-sourcing-dockerfile-image-update-6400121c1a75.

If you don't control your parent image, as would be the case if you are depending on the official ubuntu image, you can write some tooling that detects changes in the parent image's tag or checksum (not the same thing; tags are mutable) and triggers child image builds accordingly.
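
For the detection half, here is a hedged sketch that polls Docker Hub for a tag's digest and remembers the last one seen (the field names come from Hub's unofficial v2 API and may change; requires curl and jq):

    #!/usr/bin/env bash
    REPO="library/ubuntu"   # example parent image
    TAG="latest"
    STATE="/var/tmp/${REPO//\//_}_${TAG}.digest"

    CURRENT=$(curl -s "https://hub.docker.com/v2/repositories/${REPO}/tags/${TAG}/" \
        | jq -r '.digest // .images[0].digest')

    if [ "$CURRENT" != "$(cat "$STATE" 2>/dev/null)" ]; then
        echo "$CURRENT" > "$STATE"
        echo "${REPO}:${TAG} changed; trigger child image builds here"
    fi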

Ma3oxuct
  • Wow this is a big hammer, that said: since the time when I asked this question I've also come to realize that the build server is the place to tackle this problem. I'm glad to see some tooling. If you explain your approach in generic concepts (and not your exact tool/implementation) and include it in the answer I'll probably accept it. – hbogert Aug 18 '18 at 20:40
  • Thanks @hbogert I edited the above and also include an idea about what to do if you are dealing with public images – Ma3oxuct Aug 20 '18 at 00:35
10

I had the same issue and thought it could be solved simply by a cron job calling unattended-upgrade daily.

My intention is to have this as an automatic and quick solution to ensure that the production container is secure and updated, because it can take me some time to update my images and deploy a new docker image with the latest security updates.

It is also possible to automate the image build and deployment with GitHub hooks.

I've created a basic docker image that automatically checks and installs security updates daily (it can be run directly with `docker run itech/docker-unattended-upgrade`).

I also came across a different approach to check if the container needs an update.

My complete implementation:

Dockerfile

FROM ubuntu:14.04

RUN apt-get update \
&& apt-get install -y supervisor unattended-upgrades \
&& rm -rf /var/lib/apt/lists/*

COPY install /install
RUN chmod 755 /install
RUN /install

COPY start /start
RUN chmod 755 /start

ENTRYPOINT ["/start"]

Helper scripts

install

#!/bin/bash
set -e

cat > /etc/supervisor/conf.d/cron.conf <<EOF
[program:cron]
priority=20
directory=/tmp
command=/usr/sbin/cron -f
user=root
autostart=true
autorestart=true
stdout_logfile=/var/log/supervisor/%(program_name)s.log
stderr_logfile=/var/log/supervisor/%(program_name)s.log
EOF

rm -rf /var/lib/apt/lists/*

start

#!/bin/bash

set -e

echo "Adding crontab for unattended-upgrade ..."
echo "0 0 * * * root /usr/bin/unattended-upgrade" >> /etc/crontab

# can also use @daily syntax or use /etc/cron.daily

echo "Starting supervisord ..."
exec /usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf

Edit

I developed a small tool docker-run that runs as a docker container and can be used to update packages inside all or selected running containers; it can also be used to run arbitrary commands.

It can easily be tested with the following command:

docker run --rm -v /var/run/docker.sock:/tmp/docker.sock itech/docker-run exec

which by default will execute the `date` command in all running containers and display the results. If you pass `update` instead of `exec`, it will execute `apt-get update` followed by `apt-get upgrade -y` in all running containers.
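
For unattended use, the same command can be scheduled from the host's crontab; the time below is just an example:

    # nightly package update inside all running containers
    0 4 * * * docker run --rm -v /var/run/docker.sock:/tmp/docker.sock itech/docker-run update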

iTech
  • 18,192
  • 4
  • 57
  • 80
  • My reference to unattended-upgrade was only to show the analogy in a non-docker environment. My intent is to solve this the docker way (if that exists, of course). Having an extra process in a container defeats the purpose of docker, imo. It does patch the problem of the delay between upstream updating their image and you, the user, actually deploying it over your current container. Though this can take up to a day with unattended-upgrades as well, so.. Also the github reference is non-satisfactory, because the update mechanism is now heavily dependent on the host OS. – hbogert Oct 24 '14 at 07:46
  • The "docker way" does not prevent you from running other processes in the same container if they are tightly related and won't create a scalability bottleneck. And this particular use case is a good example of when you can have a container with another running process (e.g. see the image for [gitlab](https://registry.hub.docker.com/u/sameersbn/gitlab/), as it runs multiple *mandatory* processes in the same container). – iTech Oct 24 '14 at 08:27
  • I wouldn't call an update mechanism tightly related to the main function of an image. This solution is like giving every application on a conventional machine its own update mechanism instead of placing the burden on a package manager. Though it is a solution, it doesn't answer my question, which is about automatically updating local images and then re-running containers. With updating inside the containers themselves we're introducing a lot of state again, of which we have no idea, which is against the docker way (again, imho). – hbogert Oct 24 '14 at 08:44
  • You might need something higher-level than docker, such as [`Kubernetes`](https://github.com/GoogleCloudPlatform/kubernetes), which is useful for large infrastructure deployments, but it is still under heavy development by Google. At the moment, you can automate this with a provisioning tool like Ansible in a fairly simple way. – iTech Oct 24 '14 at 08:56
  • Your quoted "different approach" might be what I was looking for. Your own contribution looks like a viable alternative for "fat containers". I'll definitely look into both a bit further, thanks for your answer. – Mathias Oct 25 '14 at 12:27
  • The solution described here doesn't actually fix many of the vulnerabilities you might have in your container. If you update packages that a running program is using, then it doesn't really change anything for that running program. It would have to be restarted for it to pick up the newly installed packages/files. So it might look like you don't have any security issues, but you definitely still do. – Gijs Aug 30 '21 at 13:06
9

You would not know your container is behind without running docker pull. Then you'd need to rebuild or recompose your image.

docker pull image:tag
docker-compose -f docker-compose.yml -f production.yml up -d --build

The commands can be put in a script along with anything else necessary to complete the upgrade, although a proper container would not need anything additional.
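
A minimal sketch of such a script, reusing the commands above (the prune step is an optional extra to drop the now-dangling old layers):

    #!/usr/bin/env bash
    set -e
    docker pull image:tag
    docker-compose -f docker-compose.yml -f production.yml up -d --build
    docker image prune -f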

seanmcl
  • 1: ok, but then I'd have to look at all my local images, get their base images, pull those. Then rebuild the images whose base images have changed. Then stop containers whose image is changed and recreate the containers with 'docker run' and needed parameters. This seems overly manual. But if this is the status quo, then I'll accept the answer. – hbogert Oct 17 '14 at 12:52
  • Please wait before you accept. Maybe there is something out there. I've been using docker for 6 months, but haven't been keeping up with the latest developments. – seanmcl Oct 17 '14 at 13:42
  • Somehow, internally, Docker is able to compare images in order to perform its 'caching' capability. Perhaps you can find a way to leverage THAT. In other words, check to see if the underlying images (all the way back to the base) have changed and then trigger a process to rebuild. Unfortunately, the caching will not help you in that case: the entire image will be rebuilt because the base image has changed. – Thom Parkin Oct 22 '14 at 12:32
7

Here is a simple way to update a docker container automatically.

Put the job in place via $ crontab -e:

0 * * * * sh ~/.docker/cron.sh

Create the directory ~/.docker with the file cron.sh:

#!/bin/sh
# Pull and check whether anything new was actually downloaded
if docker pull ubuntu:latest | grep -q "Image is up to date"
then
    echo "no update, just do cleaning"
    docker system prune --force
else
    echo "newest exists, recompose!"
    cd /path/to/your/compose/file
    # note: --volumes also removes the volumes declared in the compose file
    docker-compose down --volumes
    docker-compose up -d
fi
eQ19
  • Be aware of [Docker Hub rate limits](https://www.docker.com/increase-rate-limits/) when using scripts like this as cron jobs. They only allow 100 requests per 6hrs when unauthenticated. – void Jan 26 '23 at 09:58
5

Another approach could be to assume that your base image gets behind quite quickly (and that's very likely to happen), and force another image build of your application periodically (e.g. every week) and then re-deploy it if it has changed.

As far as I can tell, popular base images like the official Debian or Java update their tags to cater for security fixes, so tags are not immutable (if you want a stronger guarantee you need to use a digest reference, `image@digest`, available in more recent Docker versions). Therefore, if you were to build your image with `docker build --pull`, your application should get the latest and greatest of the base image tag you're referencing.

Since mutable tags can be confusing, it's best to increment the version number of your application every time you do this so that at least on your side things are cleaner.
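
For example (the digest below is illustrative only):

    # In the Dockerfile, pin by digest for the strongest guarantee:
    #   FROM ubuntu@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2

    # Or keep the mutable tag but refresh it on every build,
    # bumping your own application version each time:
    docker build --pull -t myapp:1.0.1 .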

So I'm not sure that the script suggested in one of the previous answers does the job, since it doesn't rebuild your application's image - it just updates the base image tag and then restarts the container, but the new container still references the old base image hash.

I wouldn't advocate for running cron-type jobs in containers (or any other processes, unless really necessary) as this goes against the mantra of running only one process per container (there are various arguments about why this is better, so I'm not going to go into it here).

Bogdan
5

Dependency management for Docker images is a real problem. I'm part of a team that built a tool, MicroBadger, to help with this by monitoring container images and inspecting metadata. One of its features is to let you set up a notification webhook that gets called when an image you're interested in (e.g. a base image) changes.

Liz Rice
  • microbadger.com seems to no longer exist ... I know it's been six years, but what happened? – oPless Apr 07 '22 at 22:44
5

There are a lot of answers here, but none of them suited my needs. I wanted an actual answer to the asker's #1 question: how do I know when an image is updated on hub.docker.com?

The script below can be run daily. On first run, it gets a baseline of the tags and update dates from the Hub registry and saves them locally. From then on, every time it is run it checks the registry for new tags and update dates. Since these change every time a new image is pushed, they tell us whether the base image has changed. Here is the script:

#!/bin/bash

DATAPATH='/data/docker/updater/data'

if [ ! -d "${DATAPATH}" ]; then
        mkdir -p "${DATAPATH}";
fi
IMAGES=$(docker ps --format "{{.Image}}")
for IMAGE in $IMAGES; do
        ORIGIMAGE=${IMAGE}
        if [[ "$IMAGE" != *\/* ]]; then
                IMAGE=library/${IMAGE}
        fi
        IMAGE=${IMAGE%%:*}
        echo "Checking ${IMAGE}"
        PARSED=${IMAGE//\//.}
        if [ ! -f "${DATAPATH}/${PARSED}" ]; then
                # File doesn't exist yet, make baseline
                echo "Setting baseline for ${IMAGE}"
                curl -s "https://registry.hub.docker.com/v2/repositories/${IMAGE}/tags/" > "${DATAPATH}/${PARSED}"
        else
                # File does exist, do a compare
                NEW=$(curl -s "https://registry.hub.docker.com/v2/repositories/${IMAGE}/tags/")
                OLD=$(cat "${DATAPATH}/${PARSED}")
                if [[ "${VAR1}" == "${VAR2}" ]]; then
                        echo "Image ${IMAGE} is up to date";
                else
                        echo ${NEW} > "${DATAPATH}/${PARSED}"
                        echo "Image ${IMAGE} needs to be updated";
                        H=`hostname`
                        ssh -i /data/keys/<KEYFILE> <USER>@<REMOTEHOST>.com "{ echo \"MAIL FROM: root@${H}\"; echo \"RCPT TO: <USER>@<EMAILHOST>.com\"; echo \"DATA\"; echo \"Subject: ${H} - ${IMAGE} needs update\"; echo \"\"; echo -e \"\n${IMAGE} needs update.\n\ndocker pull ${ORIGIMAGE}\"; echo \"\"; echo \".\"; echo \"quit\"; sleep 1; } | telnet <SMTPHOST> 25"
                fi

        fi
done;

You will want to alter the DATAPATH variable at the top, and alter the email notification command at the end to suit your needs. For me, I have it SSH into a server on another network where my SMTP is located. But you could easily use the mail command, too.
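
For instance, if the host has a working MTA, the whole ssh/telnet block could shrink to something like this (a sketch using the classic mail command; the address is a placeholder):

    echo -e "${IMAGE} needs update.\n\ndocker pull ${ORIGIMAGE}" \
        | mail -s "$(hostname) - ${IMAGE} needs update" you@example.com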

Now, you also want to check for updated packages inside the containers themselves. This is actually probably more effective than doing a "pull" once your containers are working. Here's the script to pull that off:

#!/bin/bash


function needsUpdates() {
        RESULT=$(docker exec ${1} bash -c ' \
                if [[ -f /etc/apt/sources.list ]]; then \
                grep security /etc/apt/sources.list > /tmp/security.list; \
                apt-get update > /dev/null; \
                apt-get upgrade -oDir::Etc::Sourcelist=/tmp/security.list -s; \
                fi; \
                ')
        RESULT=$(echo $RESULT)
        GOODRESULT="Reading package lists... Building dependency tree... Reading state information... Calculating upgrade... 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded."
        if [[ "${RESULT}" != "" ]] && [[ "${RESULT}" != "${GOODRESULT}" ]]; then
                return 0
        else
                return 1
        fi
}

function sendEmail() {
        echo "Container ${1} needs security updates";
        H=`hostname`
        ssh -i /data/keys/<KEYFILE> <USER>@<REMOTEHOST>.com "{ echo \"MAIL FROM: root@${H}\"; echo \"RCPT TO: <USER>@<EMAILHOST>.com\"; echo \"DATA\"; echo \"Subject: ${H} - ${1} container needs security update\"; echo \"\"; echo -e \"\n${1} container needs update.\n\n\"; echo -e \"docker exec ${1} bash -c 'grep security /etc/apt/sources.list > /tmp/security.list; apt-get update > /dev/null; apt-get upgrade -oDir::Etc::Sourcelist=/tmp/security.list -s'\n\n\"; echo \"Remove the -s to run the update\"; echo \"\"; echo \".\"; echo \"quit\"; sleep 1; } | telnet <SMTPHOST> 25"
}

CONTAINERS=$(docker ps --format "{{.Names}}")
for CONTAINER in $CONTAINERS; do
        echo "Checking ${CONTAINER}"
        if needsUpdates $CONTAINER; then
                sendEmail $CONTAINER
        fi
done
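
Both scripts lend themselves to daily cron jobs; for example (paths and times are placeholders):

    0 6 * * * /data/docker/updater/check-base-images.sh
    30 6 * * * /data/docker/updater/check-container-packages.sh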
Fmstrat
  • mkdir in the first script should probably be mkdir -p. In addition, the first script compares VAR1 against VAR2; assume that should compare OLD against NEW. If true, though, this means the script won't really do what the OP wants UNLESS it was first run at install time. That is, it isn't really determining anything about what is installed, just whether the results differ from previous runs... – JoeG Aug 02 '17 at 16:04
  • Also, it only retrieves the first page of results from the Docker Hub web service. For images with lots of tags, the installed one might not be on the first page at all. – ʇsәɹoɈ Sep 27 '20 at 22:45
5

UPDATE: Use Dependabot - https://dependabot.com/docker/

BLUF: finding the right insertion point for monitoring changes to a container is the challenge. It would be great if DockerHub would solve this. (Repository Links have been mentioned but note when setting them up on DockerHub - "Trigger a build in this repository whenever the base image is updated on Docker Hub. Only works for non-official images.")

While trying to solve this myself I saw several recommendations for webhooks so I wanted to elaborate on a couple of solutions I have used.

  1. Use microbadger.com to track changes in a container and use its notification webhook feature to trigger an action. I set this up with zapier.com (but you can use any customizable webhook service) to create a new issue in my github repository that uses Alpine as a base image.

    • Pros: You can review the changes reported by microbadger in github before taking action.
    • Cons: Microbadger doesn't let you track a specific tag. Looks like it only tracks 'latest'.
  2. Track the RSS feed for git commits to an upstream container, e.g. https://github.com/gliderlabs/docker-alpine/commits/rootfs/library-3.8/x86_64. I used zapier.com to monitor this feed and to trigger an automatic build of my container in Travis-CI anytime something is committed (a polling sketch follows this list). This is a little extreme, but you can change the trigger to do other things, like open an issue in your git repository for manual intervention.

    • Pros: Closer to an automated pipeline. The Travis-CI build just checks whether your container has issues with whatever was committed to the base image repository. It's up to you if your CI service takes any further action.
    • Cons: Tracking the commit feed isn't perfect. Lots of things get committed to the repository that don't affect the build of the base image. It doesn't take into account any issues with the frequency/number of commits and API throttling.
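
If you'd rather poll the feed yourself than rely on zapier.com, a rough sketch (appending .atom to a GitHub commits URL generally yields the feed; the URL and state file are examples):

    #!/bin/sh
    # Hash the upstream commits feed and react when it changes.
    FEED="https://github.com/gliderlabs/docker-alpine/commits/rootfs/library-3.8/x86_64.atom"
    STATE="/var/tmp/alpine-feed.sha"

    NEW=$(curl -sL "$FEED" | sha256sum | cut -d' ' -f1)
    if [ "$NEW" != "$(cat "$STATE" 2>/dev/null)" ]; then
        echo "$NEW" > "$STATE"
        echo "upstream commits changed; trigger a CI build or open an issue"
    fi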
2stacks
4

I'm not going into the whole question of whether or not you want unattended updates in production (I think not). I'm just leaving this here for reference in case anybody finds it useful. Update all your docker images to the latest version with the following command in your terminal:

# docker images | awk '(NR>1) && ($2!~/none/) {print $1":"$2}' | xargs -L1 docker pull
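
A variant of the same idea that also reports which images actually changed (a sketch; it assumes the images carry RepoDigests, i.e. they came from a registry):

    docker images --format '{{.Repository}}:{{.Tag}}' | grep -v '<none>' \
    | while read -r img; do
        before=$(docker image inspect --format '{{index .RepoDigests 0}}' "$img" 2>/dev/null)
        docker pull -q "$img" > /dev/null
        after=$(docker image inspect --format '{{index .RepoDigests 0}}' "$img" 2>/dev/null)
        [ "$before" != "$after" ] && echo "updated: $img"
    done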

Meferdati
  • The command is useful for updating all of the images, but it doesn't change anything running in production. The containers still stem from the old images, which are now untagged. – None Oct 16 '17 at 20:17
  • True. And here's one more for the books... Use `# docker system prune -a --volumes -f` to clean up old (dangling) images, volumes etc. – Meferdati Oct 24 '18 at 19:02
3

Premise to my answer:

  1. Containers are run with tags.
  2. The same tag can point to different image IDs as we please/feel appropriate.
  3. Updates done to an image can be committed to a new image layer.

Approach

  1. Build all the containers in the first place with a security-patch update script
  2. Build an automated process for the following
    • Run the existing image in a new container with the security-patch script as the command
    • Commit the changes to the image as either
      • the existing tag -> followed by restarting the containers one by one, or
      • a new version tag -> replace a few containers with the new tag -> validate -> move all containers to the new tag

Additionally, the base image can be upgraded (or a container with a completely new base image built) at regular intervals, as the maintainer feels necessary.
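
A sketch of the patch-and-commit step for a Debian/Ubuntu-based image (image names and tags are placeholders):

    # run the existing image with the security-patch script as the command
    docker run --name patch-run myorg/app:1.4 \
        sh -c 'apt-get update && apt-get upgrade -y'

    # commit the patched filesystem as a new version tag, then clean up
    docker commit patch-run myorg/app:1.4-patched
    docker rm patch-run

    # after validation, move the tag (or roll containers over one by one)
    docker tag myorg/app:1.4-patched myorg/app:1.4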

Advantages

  1. We preserve the old version of the image while creating the new security-patched image, hence we can roll back to the previous running image if necessary.
  2. We preserve the docker cache, hence less network transfer (only the changed layer goes over the wire).
  3. The upgrade process can be validated in staging before moving to prod.
  4. This can be a controlled process, hence security patches are pushed only when necessary/deemed important.
Phani
  • In a production environment, even though they are security updates, I doubt you would want unattended updates! If unattended updates are necessary, the process can be run at regular intervals (as appropriate) as a cron job. – Phani Nov 13 '15 at 15:48
  • My premise is that security updates should come from upstream/base images. – hbogert Nov 15 '15 at 15:44
  • @hbogert I would rather say there is a fine line of differentiation between theory and practice. When things come into practice, there will be many external aspects that need to be taken into account, like: cost (not only dollar value, but also time) associated with the implementation. – Phani Nov 24 '15 at 07:00
3

The above answers are also correct.

There are two approaches:

  1. Use webhooks
  2. Run a script every few minutes to get a fresh pull of docker images

I am just sharing this script; maybe it will be helpful for you! You can use it with a cronjob; I tried it successfully on OSX.

#!/bin/bash
## You can use the commented line below in your crontab to run this as a cron job and store its output in a .txt file
#* * * * * /usr/bin/sudo -u admin -i bash -c /Users/Swapnil/Documents/checkimg.sh > /Users/Swapnil/Documents/cron_output.log 2>&1
# Example for the Docker Hub V2 API
# Returns all images and tags associated with a Docker Hub organization account.
# Requires 'jq': https://stedolan.github.io/jq/

# set username, password, and organization
# Filepath where your docker-compose file is present
FILEPATH="/Users/Swapnil/Documents/lamp-alpine"
# Your Docker hub user name
UNAME="ur username"
# Your Docker hub user password
UPASS="ur pwd"
# e.g organisation_name/image_name:image_tag
ORG="ur org name"
IMGNAME="ur img name"
IMGTAG="ur img tag"
# Container name
CONTNAME="ur container name"
# Expected built mins
BUILDMINS="5"
#Generally cronjob frequency
CHECKTIME="5"
NETWORKNAME="${IMGNAME}_private-network"
#After Image pulling, need to bring up all docker services?
DO_DOCKER_COMPOSE_UP=true
# -------
echo "Eecuting Script @ date and time in YmdHMS: $(date +%Y%m%d%H%M%S)"
set -e
PIDFILE=/Users/Swapnil/Documents/$IMGNAME/forever.pid
if [ -f $PIDFILE ]
then
  PID=$(cat $PIDFILE)
  ps -p $PID > /dev/null 2>&1
  if [ $? -eq 0 ]
  then
    echo "Process already running"
    exit 1
  else
    ## Process not found assume not running
    echo $$
    echo $$ > $PIDFILE
    if [ $? -ne 0 ]
    then
      echo "Could not create PID file"
      exit 1
    fi
  fi
else
  echo $$ > $PIDFILE
  if [ $? -ne 0 ]
  then
    echo "Could not create PID file"
    exit 1
  fi
fi

# Check Docker is running or not; If not runing then exit
if docker info|grep Containers ; then
    echo "Docker is running"
else
    echo "Docker is not running"
    rm $PIDFILE
    exit 1
fi

# Check Container is running or not; and set variable
CONT_INFO=$(docker ps -f "name=$CONTNAME" --format "{{.Names}}")
if [ "$CONT_INFO" = "$CONTNAME" ]; then
    echo "Container is running"
    IS_CONTAINER_RUNNING=true
else
    echo "Container is not running"
    IS_CONTAINER_RUNNING=false
fi


# get token
echo "Retrieving token ..."
TOKEN=$(curl -s -H "Content-Type: application/json" -X POST -d '{"username": "'${UNAME}'", "password": "'${UPASS}'"}' https://hub.docker.com/v2/users/login/ | jq -r .token)

# get list of repositories
echo "Retrieving repository list ..."
REPO_LIST=$(curl -s -H "Authorization: JWT ${TOKEN}" https://hub.docker.com/v2/repositories/${ORG}/?page_size=100 | jq -r '.results|.[]|.name')

# output images & tags
echo "Images and tags for organization: ${ORG}"
echo
for i in ${REPO_LIST}
do
  echo "${i}:"
  # tags
  IMAGE_TAGS=$(curl -s -H "Authorization: JWT ${TOKEN}" https://hub.docker.com/v2/repositories/${ORG}/${i}/tags/?page_size=100 | jq -r '.results|.[]|.name')
  for j in ${IMAGE_TAGS}
  do
    echo "  - ${j}"
  done
  #echo
done

# Check whether a particular image is the latest or not
#imm=$(curl -s -H "Authorization: JWT ${TOKEN}" https://hub.docker.com/v2/repositories/${ORG}/${IMGNAME}/tags/?page_size=100)
echo "-----------------"
echo "Last built date details about Image ${IMGNAME} : ${IMGTAG} for organization: ${ORG}"
IMAGE_UPDATED_DATE=$(curl -s -H "Authorization: JWT ${TOKEN}" https://hub.docker.com/v2/repositories/${ORG}/${IMGNAME}/tags/?page_size=100 | jq -r '.results|.[]|select(.name | contains("'${IMGTAG}'")).last_updated')
echo "On Docker Hub IMAGE_UPDATED_DATE---$IMAGE_UPDATED_DATE"
echo "-----------------"

IMAGE_CREATED_DATE=$(docker image inspect ${ORG}/${IMGNAME}:${IMGTAG} | jq -r '.[]|.Created')
echo "Locally IMAGE_CREATED_DATE---$IMAGE_CREATED_DATE"

updatedDate=$(date -jf '%Y-%m-%dT%H:%M' "${IMAGE_UPDATED_DATE:0:16}" +%Y%m%d%H%M%S) 
createdDate=$(date -jf '%Y-%m-%dT%H:%M' "${IMAGE_CREATED_DATE:0:16}" +%Y%m%d%H%M%S)
currentDate=$(date +%Y%m%d%H%M%S)

start_date=$(date -jf "%Y%m%d%H%M%S" "$currentDate" "+%s")
end_date=$(date -jf "%Y%m%d%H%M%S" "$updatedDate" "+%s")
updiffMins=$(( ($start_date - $end_date) / (60) ))
if [[ "$updiffMins" -lt $(($CHECKTIME+1)) ]]; then
        if [ ! -d "${FILEPATH}" ]; then
            mkdir "${FILEPATH}";
        fi
        cd "${FILEPATH}"
        pwd
        echo "updatedDate---$updatedDate" > "ScriptOutput_${currentDate}.txt"
        echo "createdDate---$createdDate" >> "ScriptOutput_${currentDate}.txt"
        echo "currentDate---$currentDate" >> "ScriptOutput_${currentDate}.txt"
        echo "Found after regular checking time -> Docker hub's latest updated image is new; Diff ${updiffMins} mins" >> "ScriptOutput_${currentDate}.txt"
        echo "Script is checking for latest updates after every ${CHECKTIME} mins" >> "ScriptOutput_${currentDate}.txt"
        echo "Fetching all new"
        echo "---------------------------"
        if $IS_CONTAINER_RUNNING ; then
            echo "Container is running"         
        else
            docker-compose down
            echo "Container stopped and removed; Network removed" >> "ScriptOutput_${currentDate}.txt"
        fi
        echo "Image_Created_Date=$currentDate" > ".env"
        echo "ORG=$ORG" >> ".env"
        echo "IMGNAME=$IMGNAME" >> ".env"
        echo "IMGTAG=$IMGTAG" >> ".env"
        echo "CONTNAME=$CONTNAME" >> ".env"
        echo "NETWORKNAME=$NETWORKNAME" >> ".env"
        docker-compose build --no-cache
        echo "Docker Compose built" >> "ScriptOutput_${currentDate}.txt"
        if $DO_DOCKER_COMPOSE_UP ; then
            docker-compose up -d
            echo "Docker services are up now, checked in" >> "ScriptOutput_${currentDate}.txt"  
        else
            echo "Docker services are down, checked in" >> "ScriptOutput_${currentDate}.txt"
        fi
elif [[ "$updatedDate" -gt "$createdDate" ]]; then 
    echo "Updated is latest"
    start_date=$(date -jf "%Y%m%d%H%M%S" "$updatedDate" "+%s")
    end_date=$(date -jf "%Y%m%d%H%M%S" "$createdDate" "+%s")
    diffMins=$(( ($start_date - $end_date) / (60) ))
    if [[ "$BUILDMINS" -lt "$diffMins" ]]; then
        if [ ! -d "${FILEPATH}" ]; then
            mkdir "${FILEPATH}";
        fi
        cd "${FILEPATH}"
        pwd
        echo "updatedDate---$updatedDate" > "ScriptOutput_${currentDate}.txt"
        echo "createdDate---$createdDate" >> "ScriptOutput_${currentDate}.txt"
        echo "currentDate---$currentDate" >> "ScriptOutput_${currentDate}.txt"
        echo "Found after comparing times -> Docker hub's latest updated image is new; Diff ${diffMins} mins" >> "ScriptOutput_${currentDate}.txt"
        echo "Actual image built time is less i.e. ${diffMins} mins than MAX expexted BUILD TIME i.e. ${BUILDMINS} mins" >> "ScriptOutput_${currentDate}.txt"
        echo "Fetching all new" >> "ScriptOutput_${currentDate}.txt"
        echo "-----------------------------"
        if $IS_CONTAINER_RUNNING ; then
            echo "Container is running"         
        else
            docker-compose down
            echo "Container stopped and removed; Network removed" >> "ScriptOutput_${currentDate}.txt"
        fi
        echo "Image_Created_Date=$currentDate" > ".env"
        echo "ORG=$ORG" >> ".env"
        echo "IMGNAME=$IMGNAME" >> ".env"
        echo "IMGTAG=$IMGTAG" >> ".env"
        echo "CONTNAME=$CONTNAME" >> ".env"
        echo "NETWORKNAME=$NETWORKNAME" >> ".env"
        docker-compose build --no-cache
        echo "Docker Compose built" >> "ScriptOutput_${currentDate}.txt"
        if $DO_DOCKER_COMPOSE_UP ; then
            docker-compose up -d
            echo "Docker services are up now" >> "ScriptOutput_${currentDate}.txt"  
        else
            echo "Docker services are down" >> "ScriptOutput_${currentDate}.txt"
        fi
    elif [[ "$BUILDMINS" -gt "$diffMins" ]]; then
        echo "Docker hub's latest updated image is NOT new; Diff ${diffMins} mins"
        echo "Docker images not fetched"
    else
        echo "Docker hub's latest updated image is NOT new; Diff ${diffMins} mins"
        echo "Docker images not fetched"
    fi
elif [[ "$createdDate" -gt "$updatedDate" ]]; then 
    echo "Created is latest"
    start_date=$(date -jf "%Y%m%d%H%M%S" "$createdDate" "+%s")
    end_date=$(date -jf "%Y%m%d%H%M%S" "$updatedDate" "+%s")
    echo "Docker hub has older docker image than local; Older than $(( ($start_date - $end_date) / (60) ))mins"
fi
echo 
echo "------------end---------------"
rm $PIDFILE

Here is my docker-compose file

version: "3.2"
services:
  lamp-alpine:
    build:
      context: .
    container_name: "${CONTNAME}"
    image: "${ORG}/${IMGNAME}:${IMGTAG}"
    ports:
      - "127.0.0.1:80:80"
    networks:
      - private-network 

networks:
  private-network:
    driver: bridge
1

Have you tried https://github.com/v2tec/watchtower? It's a simple tool that runs in a docker container and watches other containers; if their base image changes, it will pull the new image and redeploy the container.
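
A typical invocation from the project's README (note the project has since moved to containrrr/watchtower):

    docker run -d --name watchtower \
        -v /var/run/docker.sock:/var/run/docker.sock \
        containrrr/watchtower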

linehrr
0

A simple and great solution is shepherd.

user672009
  • iiuc, this does not help in the general sense, because it is coupled to Swarm and only *restarts* on upstream changes, whereas we want to react, rebuild, etc. on upstream changes and not simply restart. – hbogert Nov 12 '18 at 08:30
  • That sounds like something you should do in a CI pipeline – user672009 Nov 12 '18 at 11:21