7

I have about 50K images on my local Ubuntu machine:

$ docker info
Containers: 3
Images: 49708
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 49714

docker rmi -f $(docker images | grep "something" | awk "{print \$3}")

It pegs the CPU at 100% and is too slow. Is there a faster way to delete images in bulk?

shayy
  • Possible duplicate of [How to remove old and unused Docker images](https://stackoverflow.com/questions/32723111/how-to-remove-old-and-unused-docker-images) – kenorb Apr 13 '18 at 00:27

2 Answers

9

Answering my own question: there is a script in the Docker contrib directory: https://github.com/docker/docker/blob/620339f166984540f15aadef2348646eee9a5b42/contrib/nuke-graph-directory.sh

Running it with sudo deleted all my images; just restart the Docker daemon afterwards and you are good to go.
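Roughly, the invocation looks like this (a sketch only; the service commands, the raw-file URL, and the directory argument are assumptions about how the script is used, so adjust them to your setup):

# Stop the daemon first so nothing is holding the aufs mounts (assumes a
# service-based init; adjust to your init system).
sudo service docker stop

# Fetch the script and wipe the graph directory (assumes the script takes
# the Docker root directory as its argument).
curl -fsSL -o nuke-graph-directory.sh https://raw.githubusercontent.com/docker/docker/620339f166984540f15aadef2348646eee9a5b42/contrib/nuke-graph-directory.sh
chmod +x nuke-graph-directory.sh
sudo ./nuke-graph-directory.sh /var/lib/docker

# Start the daemon again; it will recreate an empty graph directory.
sudo service docker start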

shayy
2

It seems you need to clean /var/lib/docker/aufs. I didn't do it myself, and it is probably unsafe.

But first, give these aliases written by ndk a try:

# Kill all running containers.
alias dockerkillall='docker kill $(docker ps -q)'

# Delete all stopped containers.
alias dockercleanc='printf "\n>>> Deleting stopped containers\n\n" && docker rm $(docker ps -a -q)'

# Delete all untagged images.
alias dockercleani='printf "\n>>> Deleting untagged images\n\n" && docker rmi $(docker images -q -f dangling=true)'

# Delete all stopped containers and untagged images.
alias dockerclean='dockercleanc || true && dockercleani'

# Kill all running containers, then delete stopped containers and untagged images.
alias dockercleanup='dockerkillall || true && dockercleanc || true && dockercleani'
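A minimal way to use them (a sketch, assuming a bash shell; docker-aliases.sh is a hypothetical file holding the aliases above):

# Load the aliases into the current shell, or append them to ~/.bashrc.
source ./docker-aliases.sh   # hypothetical file containing the aliases above

# Delete stopped containers and untagged images in one go.
dockerclean

# Or kill running containers first, then clean everything up.
dockercleanup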

Update: several new methods of manual cleanup have been introduced since this answer was published. Let me quote the code of GitHub user adamhadani (it requires docker-py):

#!/usr/bin/env python
"""
Check all existing Docker containers for their mapped paths, and then purge any
zombie directories in docker's volumes directory which don't correspond to an
existing container.
"""
import logging
import os
import sys
from shutil import rmtree

import docker


DOCKER_VOLUMES_DIR = "/var/lib/docker/vfs/dir"


def get_immediate_subdirectories(a_dir):
    return [os.path.join(a_dir, name) for name in os.listdir(a_dir)
            if os.path.isdir(os.path.join(a_dir, name))]


def main():
    logging.basicConfig(level=logging.INFO)

    client = docker.Client()

    # Collect the volume directories that are still referenced by some container.
    valid_dirs = []
    for container in client.containers(all=True):
        volumes = client.inspect_container(container['Id'])['Volumes']
        if not volumes:
            continue

        for _, real_path in volumes.iteritems():
            if real_path.startswith(DOCKER_VOLUMES_DIR):
                valid_dirs.append(real_path)

    # Anything on disk that no container references is a dangling volume.
    all_dirs = get_immediate_subdirectories(DOCKER_VOLUMES_DIR)
    invalid_dirs = set(all_dirs).difference(valid_dirs)

    logging.info("Purging %s dangling Docker volumes out of %s total volumes found.",
                 len(invalid_dirs), len(all_dirs))
    for invalid_dir in invalid_dirs:
        logging.info("Purging directory: %s", invalid_dir)
        rmtree(invalid_dir)

    logging.info("All done.")


if __name__ == "__main__":
    sys.exit(main())
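A sketch of how the script above might be run (the filename is hypothetical, and it assumes a Python 2 environment with the old docker-py package, since the code uses docker.Client() and iteritems()):

# Install the old docker-py client library (assumption: Python 2 environment).
sudo pip install docker-py

# Save the script above as purge_dangling_volumes.py (hypothetical name) and
# run it as root so it can delete directories under /var/lib/docker.
sudo python purge_dangling_volumes.py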
Vitaly Isaev
  • The rmi with the dangling filter also takes forever. I don't want to delete all of the images, which is why I grep for "something". Messing around in the filesystem is not feasible, since I need to keep some of the images and not just perform rm -rf. – shayy Apr 16 '15 at 13:52