
I am trying to build a Docker image that requires copying some large files (~75GB), and the build fails with the following error:

$ docker build -t my-project .
Step 8/10 : COPY some-large-directory
  failed to copy files: failed to copy directory: Error processing tar file(exit status 1): write /directory: no space left on device

I think the error is caused by the container running out of space (I have already increased the Docker disk image size to 160GB in Docker preferences). If so, how can I increase the maximum container size on macOS (High Sierra)? (I read the default is 100GB.)

asked by Nick Fernandez (edited by Brad Koch)

3 Answers


Docker objects accumulate over time, so with heavy usage you can eventually exhaust the available storage. You can confirm that this is the issue with docker system df:

docker system df
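
The exact figures will differ, but the output breaks usage down by object type and shows how much of it is reclaimable. It looks something like this (the sizes here are purely illustrative):

$ docker system df
TYPE                TOTAL     ACTIVE    SIZE      RECLAIMABLE
Images              42        3         81.5GB    75.2GB (92%)
Containers          7         1         2.1GB     2.0GB (95%)
Local Volumes       12        2         14.3GB    11.8GB (82%)
Build Cache         156       0         23.9GB    23.9GB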

You can start reclaiming space by pruning your unused objects; a couple of examples:

# Prune everything
docker system prune

# Only prune images
docker image prune

Be sure to read the Prune unused Docker objects documentation first to make sure you understand what you'll be discarding.
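
If the defaults don't free enough, the prune commands also take flags that widen their scope. A few variants worth knowing (each prints a warning and asks for confirmation before deleting anything):

# Remove all unused images, not just dangling (untagged) ones
docker image prune -a

# Remove unused local volumes (volumes are excluded from pruning by default)
docker volume prune

# Remove the build cache
docker builder prune

# Everything at once, including unused volumes
docker system prune -a --volumes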

If you really can't reclaim enough space by discarding objects, then you can always allocate more disk space.

answered by Brad Koch

  • It might be worth mentioning that `docker system prune` will tell you what it's going to prune (and you'll have to confirm) before it prunes anything. – Idan Gozlan Jan 20 '21 at 23:06
  • `docker builder prune` did the trick for me -- my build cache was using 47GB, and was primarily responsible for my docker machine running out of space. – hbd Apr 09 '21 at 05:14
  • `docker system df` will also tell you how much space the local volumes are taking. `docker volume prune` will remove all the unused local volumes. – giavac Jul 27 '21 at 12:53

Instructions on how to increase space:

https://forums.docker.com/t/no-space-left-on-device-error/10894/26?u=adnan

Be aware that untagged images and old containers can take up loads of space.

To delete untagged images, first run

docker images

to see the extent of the issue, then

docker rmi -f $(docker images | grep "<none>" | awk '{print $3}')

and similarly for containers, try something like

docker rm -f $(docker ps -aq)

(this removes all containers, so be careful).
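
On any reasonably recent Docker, these one-liners have built-in equivalents that avoid the grep/awk parsing:

# Remove dangling (untagged) images using Docker's built-in filter
docker rmi $(docker images --filter "dangling=true" -q)

# Remove all stopped containers (safer than force-removing everything)
docker container prune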

Updated 2020:

docker system prune is also a quick method of removing old containers and untagged images.
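
Note that `docker system prune` lists what it will delete and asks for confirmation before touching anything; the exact wording varies by Docker version, but it looks roughly like this:

$ docker system prune
WARNING! This will remove:
  - all stopped containers
  - all networks not used by at least one container
  - all dangling images
  - all dangling build cache

Are you sure you want to continue? [y/N]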

answered by PaulNUK

Increasing the space available to Docker via the Docker preferences on macOS ultimately fixed the problem.
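
If you want to verify the change from a terminal, you can inspect the size of Docker Desktop's virtual disk image. The path and filename below are what Docker Desktop for Mac has typically used, but they vary between versions (older releases used Docker.qcow2), so treat this as a sketch:

# Apparent (maximum) size vs. actual on-disk usage of the disk image;
# exact location and filename vary by Docker Desktop version
ls -lh ~/Library/Containers/com.docker.docker/Data/vms/0/Docker.raw
du -h  ~/Library/Containers/com.docker.docker/Data/vms/0/Docker.raw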

answered by Nick Fernandez

  • True, but being aware of having hundreds of images is quite important, so it's better to run the suggested cleanup commands before increasing the space available to Docker, because quite likely there is no need for it. – Ing. Luca Stucchi Mar 18 '19 at 10:25
  • If you want to decrease the space available to Docker, you will be warned that you'll lose all your images, containers and volumes. – Ilya Kolesnikov Jun 10 '22 at 09:39
  • The setting was Resources -> Advanced -> Virtual disk limit. – Dan Dascalescu Jun 28 '23 at 22:59