
I have a problem pruning Docker. After building images, I run `docker system prune --volumes -a -f`, but it doesn't release space from `/var/lib/docker/overlay2`. See below:

Before building the image, disk space & /var/lib/docker/overlay2 size:

    ubuntu@xxx:~/tmp/app$ df -hv
    Filesystem      Size  Used Avail Use% Mounted on
    udev            1.9G     0  1.9G   0% /dev
    tmpfs           390M  5.4M  384M   2% /run
    /dev/nvme0n1p1   68G   20G   49G  29% /
    tmpfs           2.0G  8.0K  2.0G   1% /dev/shm
    tmpfs           5.0M     0  5.0M   0% /run/lock
    tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
    tmpfs           390M     0  390M   0% /run/user/1000
    ubuntu@xxx:~/tmp/app$ sudo du -hs /var/lib/docker/overlay2
    8.0K    /var/lib/docker/overlay2

Building the image

    ubuntu@xxx:~/tmp/app$ docker build -f ./Dockerfile .
    Sending build context to Docker daemon  1.027MB
    Step 1/12 : FROM mhart/alpine-node:9 as base
    9: Pulling from mhart/alpine-node
    ff3a5c916c92: Pull complete 
    c77918da3c72: Pull complete 
    Digest: sha256:3c3f7e30beb78b26a602f12da483d4fa0132e6d2b625c3c1b752c8a8f0fbd359
    Status: Downloaded newer image for mhart/alpine-node:9
     ---> bd69a82c390b
    .....
    ....
    Successfully built d56be87e90a4

Sizes after image built:

    ubuntu@xxx:~/tmp/app$ df -hv
    Filesystem      Size  Used Avail Use% Mounted on
    udev            1.9G     0  1.9G   0% /dev
    tmpfs           390M  5.4M  384M   2% /run
    /dev/nvme0n1p1   68G   21G   48G  30% /
    tmpfs           2.0G  8.0K  2.0G   1% /dev/shm
    tmpfs           5.0M     0  5.0M   0% /run/lock
    tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
    tmpfs           390M     0  390M   0% /run/user/1000
    ubuntu@xxx:~/tmp/app$ sudo du -hs /var/lib/docker/overlay2
    3.9G    /var/lib/docker/overlay2
    ubuntu@xxx:~/tmp/app$ docker system prune -af --volumes
    Deleted Images:
    deleted: sha256:ef4973a39ce03d2cc3de36d8394ee221b2c23ed457ffd35f90ebb28093b40881
    deleted: sha256:c3a0682422b4f388c501e29b446ed7a0448ac6d9d28a1b20e336d572ef4ec9a8
    deleted: sha256:6988f1bf347999f73b7e505df6b0d40267dc58bbdccc820cdfcecdaa1cb2c274
    deleted: sha256:50aaadb4b332c8c1fafbe30c20c8d6f44148cae7094e50a75f6113f27041a880
    untagged: alpine:3.6
    untagged: alpine@sha256:ee0c0e7b6b20b175f5ffb1bbd48b41d94891b0b1074f2721acb008aafdf25417
    deleted: sha256:d56be87e90a44c42d8f1c9deb188172056727eb79521a3702e7791dfd5bfa7b6
    deleted: sha256:067da84a69e4a9f8aa825c617c06e8132996eef1573b090baa52cff7546b266d
    deleted: sha256:72d4f65fefdf8c9f979bfb7bce56b9ba14bb9e1f7ca676e1186066686bb49291
    deleted: sha256:037b7c3cb5390cbed80dfa511ed000c7cf3e48c30fb00adadbc64f724cf5523a
    deleted: sha256:796fd2c67a7bc4e64ebaf321b2184daa97d7a24c4976b64db6a245aa5b1a3056
    deleted: sha256:7ac06e12664b627d75cd9e43ef590c54523f53b2d116135da9227225f0e2e6a8
    deleted: sha256:40993237c00a6d392ca366e5eaa27fcf6f17b652a2a65f3afe33c399fff1fb44
    deleted: sha256:bafcf3176fe572fb88f86752e174927f46616a7cf97f2e011f6527a5c1dd68a4
    deleted: sha256:bbcc764a2c14c13ddbe14aeb98815cd4f40626e19fb2b6d18d7d85cc86b65048
    deleted: sha256:c69cad93cc00af6cc39480846d9dfc3300c580253957324872014bbc6c80e263
    deleted: sha256:97a19d85898cf5cba6d2e733e2128c0c3b8ae548d89336b9eea065af19eb7159
    deleted: sha256:43773d1dba76c4d537b494a8454558a41729b92aa2ad0feb23521c3e58cd0440
    deleted: sha256:721384ec99e56bc06202a738722bcb4b8254b9bbd71c43ab7ad0d9e773ced7ac
    untagged: mhart/alpine-node:9
    untagged: mhart/alpine-node@sha256:3c3f7e30beb78b26a602f12da483d4fa0132e6d2b625c3c1b752c8a8f0fbd359
    deleted: sha256:bd69a82c390b85bfa0c4e646b1a932d4a92c75a7f9fae147fdc92a63962130ff

    Total reclaimed space: 122.2MB

It reclaimed only 122.2 MB. Sizes after the prune:

    ubuntu@xxx:~/tmp/app$ df -hv
    Filesystem      Size  Used Avail Use% Mounted on
    udev            1.9G     0  1.9G   0% /dev
    tmpfs           390M  5.4M  384M   2% /run
    /dev/nvme0n1p1   68G   20G   48G  30% /
    tmpfs           2.0G  8.0K  2.0G   1% /dev/shm
    tmpfs           5.0M     0  5.0M   0% /run/lock
    tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
    tmpfs           390M     0  390M   0% /run/user/1000
    ubuntu@xxx:~/tmp/app$ sudo du -hs /var/lib/docker/overlay2
    3.7G    /var/lib/docker/overlay2

As you can see, there are 0 containers/images:

    ubuntu@xxx:~/tmp/app$ docker ps -a
    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
    ubuntu@xxx:~/tmp/app$ docker images -a
    REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE

But the size of `/var/lib/docker/overlay2` has only decreased from 3.9G to 3.7G, and if I build more than one image, it increases every time. This is the Dockerfile I'm building:

    FROM mhart/alpine-node:9 as base
    RUN apk add --no-cache make gcc g++ python
    WORKDIR /app
    COPY package.json /app
    RUN npm install --silent

    # Only copy over the node pieces we need from the above image
    FROM alpine:3.6
    COPY --from=base /usr/bin/node /usr/bin/
    COPY --from=base /usr/lib/libgcc* /usr/lib/libstdc* /usr/lib/
    WORKDIR /app
    COPY --from=base /app .
    COPY . .
    CMD ["node", "server.js"]

Why isn't it cleaning the overlay2 folder? How can I handle this? Is there a solution? Is it a known bug?

Konrad Rudolph
Nahuel
  • Did you check `docker volume ls` – rdas May 09 '19 at 20:12
  • Yes. There's nothing there. Volumes were removed when I did "docker system prune -af --volumes" – Nahuel May 09 '19 at 20:53
  • look if anything here helps you [Some way to clean up / identify contents of /var/lib/docker/overlay](https://forums.docker.com/t/some-way-to-clean-up-identify-contents-of-var-lib-docker-overlay/30604/24) – matanper May 09 '19 at 20:58
  • Do you have more than one instance of dockerd running on that machine? Docker in Docker, Snap based install, etc? What files are in that directory? Is your DOCKER_HOST variable set? (`echo $DOCKER_HOST`) – BMitch May 09 '19 at 23:48
  • nothing helped there @matanper – Nahuel May 10 '19 at 13:12
  • no, I don't have more than one instances running @BMitch. I don't have configured DOCKER_HOST variable, is it necessary? – Nahuel May 10 '19 at 13:13
  • The variable isn't needed, I'm trying to make sure the client is talking to the same server where you're looking at the filesystem. What files do you see in the directory? – BMitch May 10 '19 at 13:35
  • ```ubuntu@xxx:~$ sudo ls -la /var/lib/docker/overlay2 total 2568 drwx------ 616 root root 69632 May 13 12:40 . drwx--x--x 14 root root 4096 May 9 15:42 .. drwx------ 4 root root 4096 May 10 17:24 002ac961ae0627fc0e50084fa31582bda36946e9c69626a229e880ae0ba33407 drwx------ 5 root root 4096 May 10 17:02 0030b266af7169f7dc0ca7c310e7228dde9e36036cb9d6683d1c88549ef81fbc drwx------ 5 root root 4096 May 10 17:14 010dc160fa75f896994987fc446856f31c368014a01f1991538078821dd332a3 drwx------ 5 root root 4096 May 10 17:23 ``` – Nahuel May 13 '19 at 13:42
  • 5
    Are you able to find a proper solution for this problem? I'm having same issue. – Şeref Acet Jan 29 '20 at 14:04
  • Did you try `docker system prune --all` . For me this solution works all the times. – Antonio Petricca May 18 '21 at 08:23

3 Answers


It's probably logs or other unneeded files in the overlay2 folder, not Docker images, that are the problem.

Try:

    sudo du -sh /var/lib/docker/overlay2/*

The following worked for me to show the exact culprit folders:

    sudo -s
    cd /
    df -h

Then `cd` into the culprit folder(s) and run `rm *` (but be aware that deleting files Docker still references can break your environment; see the other answer here).
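As a sketch of the `du`-based approach (demonstrated on a throwaway directory here, since inspecting `/var/lib/docker/overlay2` itself requires root), you can rank subdirectories by size, largest first:

```shell
# Stand-in for /var/lib/docker/overlay2: a scratch directory with two "layers"
demo=$(mktemp -d)
mkdir -p "$demo/layer-a" "$demo/layer-b"
dd if=/dev/zero of="$demo/layer-a/big.bin" bs=1024 count=2048 2>/dev/null   # ~2 MB
dd if=/dev/zero of="$demo/layer-b/small.bin" bs=1024 count=16 2>/dev/null   # ~16 KB

# Largest directories first -- the top entries are the culprits
du -sh "$demo"/* | sort -rh
```

Against the real directory, the equivalent would be `sudo du -sh /var/lib/docker/overlay2/* | sort -rh | head`.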

Chris Halcrow

A bare `docker system prune` will not delete:

  • running containers
  • tagged images
  • volumes

The big things it does delete are stopped containers and untagged images. You can pass flags to `docker system prune` to delete images and volumes as well; just realize that images may have been built locally and would need to be recreated, and volumes may contain data you want to back up first:

    $ docker system prune --help

    Usage:  docker system prune [OPTIONS]

    Remove unused data

    Options:
      -a, --all             Remove all unused images not just dangling ones
          --filter filter   Provide filter values (e.g. 'label=<key>=<value>')
      -f, --force           Do not prompt for confirmation
          --volumes         Prune volumes

What this still doesn't prune are:

  • running containers
  • images used by those containers
  • volumes used by those containers

Other storage associated with a running container includes container logs (`docker logs` on a container shows these) and filesystem changes made by the container (`docker diff` shows what has changed in the container filesystem). To clean logs, see this answer on how to configure a default limit for all new containers, and the risks of manually deleting logs in a running container.
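For reference, with the default `json-file` log driver a daemon-wide log limit can be set in `/etc/docker/daemon.json` (the sizes below are just illustrative values; the daemon must be restarted for the change to take effect, and it only applies to newly created containers):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```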

In this case, it looks like there are still files in overlay2 even when all containers are stopped and deleted. First, realize these are just directories of files; you can dig into each one and see what files are there. They are layers from the overlay filesystem, so deleting them can result in a broken environment, since they are referenced from other parts of the docker filesystem. There are several possible causes I can think of for this:

  • Corruption in the docker engine: perhaps folders were deleted manually outside of docker, causing it to lose track of various overlay directories in use. Or perhaps, as the hard drive filled up, the engine started to create a layer and lost track of it. Restarting the docker engine may help it resync these folders.
  • You are looking at a different docker engine. E.g. if you are running rootless containers, those live in your user's home directory rather than /var/lib/docker. Or if you have configured a docker context or set $DOCKER_HOST, you may be running commands against a remote docker engine and not pruning your local directories.

Since you have already deleted all containers, and have no other data to preserve (like volumes), it's safe to completely reset docker. This can be done with:

    # DANGER, this will reset docker, deleting all containers, images, and volumes
    sudo -s
    systemctl stop docker
    rm -rf /var/lib/docker
    systemctl start docker
    exit

Importantly, you should not delete individual files and directories from overlay2. See this answer for the issues that occur if you do that. Instead, the above is a complete wipe of the docker folder returning to an initial empty state.

BMitch

On Docker Desktop for Mac, I was suddenly bumping into this all the time. Resizing the disk image at ~/Library/Containers/com.docker.docker/Data/vms/0/data/Docker.raw to be bigger (the default was something like 54 GB; I upped it to 128 GB) allowed me to proceed, at least for the time being.

The guidance I could find mainly suggested reducing its size if you are running up against the size limits of your hard drive, but I have plenty of space there.

tripleee