I just built an application with Express.js for an institution where they upload video tutorials. At first the videos were being uploaded to the same server, but later I switched to Amazon, meaning only the videos are now uploaded to Amazon. Now I get this error whenever I try to upload: ENOSPC: no space left on device. I have cleared the tmp folder to no avail. I should mention that I have searched extensively about this issue, but none of the solutions seem to work for me.
17 Answers
You just need to clean up the Docker system to tackle it. It worked for me.
$ docker system prune

- This also worked for me, do you know why? – Carlos Morales Apr 21 '21 at 07:05
- Probably because you, or someone who used your computer, started a Docker container at some point and forgot to stop it. Or maybe some app uses Docker and forgot to clean up. Whatever the cause, it sure freed a lot of space for me :) – Trake Vital Jun 01 '21 at 06:07
- I would suggest using `docker system prune --all` to free even more space – Trake Vital Jun 01 '21 at 06:10
- This error happens when there's not enough space available on the disk. If you are using Docker and building the containers on the same machine without using Docker Hub, the disk eventually becomes full because of the orphaned/unused containers and images you have been pulling and building. So this command simply prunes the unused images and containers, freeing up space. Once space is freed, the Node.js app has enough room to run. – Renjith Oct 12 '21 at 04:45
In my case, I got the error 'npm WARN tar ENOSPC: no space left on device' while running Node.js in Docker. I just used the command below to reclaim space.
sudo docker system prune -af

- a = Remove all unused images, not just dangling ones; f = Do not prompt for confirmation – Sovattha Sok Feb 05 '20 at 09:00
I had the same problem; take a look at the selected answer in the related Stack Overflow question.
Here is the command that I used (my OS: Linux Mint 18.3 Sylvia, which is an Ubuntu/Debian-based Linux system).
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p

- In Docker-based environments, doing this on the host works for all containers. – Yohan Liyanage Apr 18 '19 at 08:20
- I'm in a Docker-based environment as well. How exactly do I do this on the host, @YohanLiyanage? – him229 Jun 25 '20 at 18:38
- I see this helped a lot of people, but it is not helping me. Even `df -h` doesn't show any disk utilization of more than 50%. Do I need to restart anything after running this command? – mohit sharma Aug 26 '21 at 13:21
I have come across a similar situation where the disk is free but the system is not able to create new files. I am using forever to run my Node app, and forever needs to open a file to keep track of the Node process it's running.
If you have free storage space on your system but keep getting error messages such as "No space left on device", you're likely facing issues with not having sufficient space left in your inode table.
Use df -i, which shows the IUse% column, like this:
Filesystem Inodes IUsed IFree IUse% Mounted on
udev 992637 537 992100 1% /dev
tmpfs 998601 1023 997578 1% /run
If your IUse% reaches 100%, your inode table is exhausted.
Identify dummy or unnecessary files in the system and delete them.

- Hi narasimhanaidu, please remember to enclose your code in backticks (`myVar = 1 + 1`) or with spacing so that it's clear what is description and what is code. – Fons MA Nov 06 '19 at 07:02
I got this error when my script was trying to create a new file. It may look like you've got lots of space on the disk, but if you've got millions of tiny files then you could have used up all the available inodes. Run df -hi to see how many inodes are free.

I had the same problem; you can clear the trash if you haven't already. It worked for me.
(I found the command on a forum, so read about it before you decide to use it; I'm a beginner and just copied it, and I don't know the full scope of what it does exactly.)
$ rm -rf ~/.local/share/Trash/*
The command is from this forum:
https://askubuntu.com/questions/468721/how-can-i-empty-the-trash-using-terminal

Well, in my own case, what actually happened was that while the files were being uploaded to Amazon Web Services, I wasn't deleting the files from the temp folder. Every developer knows that when uploading files to a server, they are initially stored in the temp folder before being copied to whichever folder you want (I know this is the case for Node.js and PHP). So try deleting your temp folder and see, and make sure your upload method clears the temp folder immediately after every upload.
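For illustration, here is a minimal sketch of that idea. It is not the poster's actual code; it assumes multer for handling the temporary upload and the AWS SDK v3 S3 client, and the bucket, region, and paths are placeholders. The important part is the finally block, which removes the temp copy whether or not the S3 upload succeeds:

const express = require('express');
const fs = require('fs');
const multer = require('multer');
const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');

const app = express();
const upload = multer({ dest: '/tmp/uploads' });   // multer writes incoming files here
const s3 = new S3Client({ region: 'us-east-1' });  // placeholder region

app.post('/videos', upload.single('video'), async (req, res) => {
  const tempPath = req.file.path;                  // e.g. /tmp/uploads/abc123
  try {
    await s3.send(new PutObjectCommand({
      Bucket: 'my-video-bucket',                   // placeholder bucket name
      Key: req.file.originalname,
      Body: fs.createReadStream(tempPath),
      ContentLength: req.file.size,
    }));
    res.sendStatus(201);
  } catch (err) {
    res.status(500).send('upload failed');
  } finally {
    fs.unlink(tempPath, () => {});                 // always remove the temp file so /tmp can't fill up
  }
});

app.listen(3000);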

- Or just make sure there is space left on your device if you get an error saying "no space left on device". – Phil Mar 09 '21 at 16:05
- Yes, this is a common mistake developers make: forgetting to clear the files from temp after uploading. Eventually the disk becomes full and the machine runs out of space. This also happens when we implement a logging service and forget to monitor disk usage; the logs grow and the machine runs out of space. So don't forget log rotation when implementing a logging service. – Renjith Oct 12 '21 at 04:53
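To illustrate the log-rotation point from the comment above, here is a minimal sketch; it assumes the winston and winston-daily-rotate-file packages, and the file names and limits are only examples:

const winston = require('winston');
require('winston-daily-rotate-file');              // registers the DailyRotateFile transport

const logger = winston.createLogger({
  transports: [
    new winston.transports.DailyRotateFile({
      filename: 'app-%DATE%.log',                  // one log file per day
      datePattern: 'YYYY-MM-DD',
      maxSize: '20m',                              // rotate a file once it reaches 20 MB
      maxFiles: '14d',                             // delete rotated files older than 14 days
      zippedArchive: true,                         // compress rotated files
    }),
  ],
});

logger.info('rotation keeps old logs from slowly filling the disk');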
You can set a new limit temporarily with:
sudo sysctl fs.inotify.max_user_watches=524288
sudo sysctl -p
If you'd like to make your limit permanent, use:
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf
sudo sysctl -p

- macOS: sysctl: unknown oid 'fs.inotify.max_user_watches', and the -p flag does not exist – Cyril Duchon-Doris Aug 03 '21 at 10:59
Adding to the discussion: the above commands work even when the program is not run from Docker.
Repeating those commands:
sudo sysctl fs.inotify.max_user_watches=524288
docker system prune

- Open Docker Desktop
- Go to Troubleshoot
- Click Reset to factory defaults

- Thank you Daniel, after spending half of my afternoon on this, you saved me the second half... PS: Even after `sudo docker system prune -af` I still had the error; only this solution worked for me. – olivier dumas Oct 19 '22 at 15:55
The previous answers fixed my problem for a short period of time.
I then had to find the big files that weren't being used and were filling my disk.
On the host computer I ran: df
I got this; my problem was /dev/nvme0n1p3:
Filesystem 1K-blocks Used Available Use% Mounted on
udev 32790508 0 32790508 0% /dev
tmpfs 6563764 239412 6324352 4% /run
/dev/nvme0n1p3 978611404 928877724 0 100% /
tmpfs 32818816 196812 32622004 1% /dev/shm
tmpfs 5120 4 5116 1% /run/lock
tmpfs 32818816 0 32818816 0% /sys/fs/cgroup
/dev/nvme0n1p1 610304 28728 581576 5% /boot/efi
tmpfs 6563764 44 6563720 1% /run/user/1000
I installed ncdu and ran it against the root directory. You may need to manually delete a small file to make space for ncdu; if that's not possible, you can use df to find the files manually:
sudo apt-get install ncdu
sudo ncdu /
That helped me identify the big files; in my case they were in the /tmp folder. Then I used this command to delete the ones that hadn't been used in the last 10 days:
sudo find /tmp -type f -atime +10 -delete

- Yes!!! That df tip was a huge help! The above answers didn't work for me; at least I have a chance now! Tysm! – Jen Feb 01 '21 at 22:47
tl;dr: restart Docker Desktop.
The only thing that fixed this for me was quitting and restarting Docker Desktop.
I tried docker system prune, removed as many volumes as I could safely remove, removed all containers and many images, and nothing worked until I quit and restarted Docker Desktop.
Before restarting Docker Desktop, the system prune removed 2GB, but after restarting it removed 12GB.
So, if you tried to run system prune and it didn't work, try restarting Docker and running the system prune again.
That's what I did and it worked. I can't say I understand why it worked.

The issue was actually a result of the temp folder not being cleared after uploads, so all the videos that had been uploaded up to that point were still sitting in the temp folder and the disk space had been exhausted. The temp folder has been cleared now and everything works fine.
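As an extra safety net, here is a sketch I am adding purely for illustration (it is not part of the original answer, and the temp path and age threshold are placeholders): the app can periodically sweep the temp folder and delete anything older than a day, so a missed per-upload cleanup cannot slowly fill the disk.

const fs = require('fs');
const path = require('path');

const TEMP_DIR = '/tmp/uploads';                   // placeholder temp folder
const MAX_AGE_MS = 24 * 60 * 60 * 1000;            // one day

function sweepTempDir() {
  fs.readdir(TEMP_DIR, (err, names) => {
    if (err) return;                               // temp dir may not exist yet
    for (const name of names) {
      const filePath = path.join(TEMP_DIR, name);
      fs.stat(filePath, (statErr, stats) => {
        if (statErr || !stats.isFile()) return;
        if (Date.now() - stats.mtimeMs > MAX_AGE_MS) {
          fs.unlink(filePath, () => {});           // ignore errors; the file may already be gone
        }
      });
    }
  });
}

setInterval(sweepTempDir, 60 * 60 * 1000);         // run once an hour
sweepTempDir();                                    // and once at startup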

- 1,741
- 4
- 15
- 23
I struggled hard with this for some time. The following command worked:
docker system prune
But then I checked the volume and it was full. I inspected it and realized that node_modules had become the real trouble.
So I deleted node_modules, ran npm install again, and it worked like a charm.
Note: this worked for me on a Node.js and React project.

In my case (a Linux ext4 file system), the large_dir feature needed to be enabled.
# check if it's enabled
sudo tune2fs -l /dev/sdc | grep large_dir
# enable it
sudo tune2fs -O large_dir /dev/sda
On Ubuntu, an ext4 file system by default has a 64M limit on the number of files in a single directory, unless large_dir is enabled.

I first checked the available space using this command (the -h flag gives human-readable output):
free -h
Then I reclaimed more free space (Total reclaimed space: 2.77GB, up from 0.94GB) using this command:
sudo docker system prune -af
This worked for me.
