7

I am pulling a variety of Docker images from AWS, but the pull keeps failing on the final image with the following error:

ERROR: for <container-name>  failed to register layer: Error processing tar file(exit status 1): symlink libasprintf.so.0.0.0 /usr/lib64/libasprintf.so: no space left on device
ERROR: failed to register layer: Error processing tar file(exit status 1): symlink libasprintf.so.0.0.0 /usr/lib64/libasprintf.so: no space left on device

Does anyone know how to fix this problem?

I have tried stopping Docker, removing /var/lib/docker, and starting it back up again, but it gets stuck at the same place.

Result of df -h:

Filesystem      Size  Used Avail Use% Mounted on
/dev/nvme0n1p1  8.0G  6.5G  1.6G  81% /
devtmpfs        3.7G     0  3.7G   0% /dev
tmpfs           3.7G     0  3.7G   0% /dev/shm
tmpfs           3.7G   17M  3.7G   1% /run
tmpfs           3.7G     0  3.7G   0% /sys/fs/cgroup
tmpfs           753M     0  753M   0% /run/user/0
tmpfs           753M     0  753M   0% /run/user/1000
J.Main

3 Answers

10

The issue was that the EC2 instance did not have enough EBS storage assigned to it. Following these steps will fix it (a sketch of the commands appears after the list):

  • Navigate to EC2 in the AWS console
  • Look at the details of your instance and locate the root device and block device
  • Click the path and select the EBS ID
  • Click Actions in the volume panel
  • Select Modify Volume
  • Enter the desired volume size (the default is 8 GB; you shouldn't need much more)
  • SSH into the instance
  • Run lsblk to see the available volumes and note their sizes
  • Run sudo growpart /dev/volumename 1 on the volume you want to resize
  • Run sudo xfs_growfs /dev/volumename (the one with / in the MOUNTPOINT column of lsblk)
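
As a minimal sketch of the last few steps, assuming an Amazon Linux instance whose root filesystem is XFS on partition 1 of /dev/nvme0n1 (substitute whatever lsblk shows on your instance; for an ext4 root filesystem use resize2fs instead of xfs_growfs):

# Confirm the enlarged EBS volume is visible and note the device/partition names
lsblk

# Grow partition 1 to fill the volume (the device name here is an assumption -- use yours)
sudo growpart /dev/nvme0n1 1

# Grow the XFS root filesystem to fill the partition
sudo xfs_growfs -d /

# For an ext4 root filesystem instead:
# sudo resize2fs /dev/nvme0n1p1

# Verify the new size
df -h /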
J.Main
  • For volumes that have a partition, such as the volumes shown in the previous step, use the growpart command to extend the partition. Notice that there is a space between the device name and the partition number: sudo growpart /dev/xvda 1, then sudo growpart /dev/xvdf 1. – almgwary May 12 '22 at 23:28
2

I wrote an article about this after struggling with the same issue. If you have deployed successfully before, you may just need to add some maintenance to your deploy process. In my case, I added a cron job to run the following:

docker ps -q --filter "status=exited" | xargs --no-run-if-empty docker rm;
docker volume ls -qf dangling=true | xargs -r docker volume rm;
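
For example, a daily crontab entry along these lines would run the cleanup automatically (the 03:00 schedule is just an illustration, and depending on your setup you may need the full path to the docker binary):

# m h dom mon dow  command
0 3 * * * docker ps -q --filter "status=exited" | xargs -r docker rm; docker volume ls -qf dangling=true | xargs -r docker volume rm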

https://medium.com/@_ifnull/aws-ecs-no-space-left-on-device-ce00461bb3cb

Daniel Smith
  • This was the solution for me using CDK on Mac. Note that `--no-run-if-empty` is just `-r` on Mac/BSD (and ignored as it is the default); see https://stackoverflow.com/a/8296746/5219886. – Daniel Mar 29 '22 at 03:28
2

It might be that older Docker images, volumes, etc. are still taking up space in your EBS storage. From the Docker docs:

Docker takes a conservative approach to cleaning up unused objects (often referred to as “garbage collection”), such as images, containers, volumes, and networks: these objects are generally not removed unless you explicitly ask Docker to do so. This can cause Docker to use extra disk space.

SSH into your EC2 instance and verify that the space is actually taken up:

ssh ec2-user@<public-ip>
df -h
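
If the root filesystem really is nearly full and most of that space sits under /var/lib/docker, docker system df shows how much of it is images, containers, local volumes, and build cache (the du command assumes the default /var/lib/docker data directory):

# Breakdown of Docker's disk usage
docker system df

# Total size of Docker's data directory
sudo du -sh /var/lib/docker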

Then you can prune the old images out:

docker system prune

Read the warning message from this command!

You can also prune the volumes. Do this only if you're not storing files locally (which you shouldn't be doing anyway; files should live in something like AWS S3).

Use with caution:

docker system prune --volumes
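
Note that by default docker system prune only removes stopped containers, unused networks, dangling images, and build cache; adding -a also removes every image not referenced by a container, which frees the most space but means those images will be pulled again on the next deploy:

# Most aggressive cleanup: also removes all unused (not just dangling) images and unused volumes
docker system prune -a --volumes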
Nitin Nain