356

I have a disk drive where the inode usage is 100% (using the df -i command). However, even after deleting a substantial number of files, the usage remains at 100%.

What's the correct way to reduce it, then?

How is it possible that a disk drive with lower disk space usage can have higher inode usage than a disk drive with higher disk space usage?

If I zip up a lot of files, would that reduce the used inode count?
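
For reference, these are the two views being compared here (the root filesystem is just an example mount point):

df -h /     # block (space) usage
df -i /     # inode usage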

neversaint
  • 60,904
  • 137
  • 310
  • 477
  • 9
    I want to give you 50 points for this question. How can I do that? :) – Sophy Feb 18 '16 at 09:22
  • @Sophy Don't do that. you'll get auto-banned – Steven Lu Oct 15 '16 at 22:06
  • 1
    @StevenLu Thank you for your info! I want to give credit to him because I spent a few days trying to solve my issue, and this answer helped me. Thanks again, – Sophy Oct 17 '16 at 08:43
  • 1
    @Sophy : why award something off-topic for SO? :) That's definitely not a programming question, no matter how many upvotes it gets. – tink Sep 17 '17 at 18:08
  • 2
    Empty directories also consume inodes. Deleting them can free up some inodes. The number can be significant in some use-cases. You can delete empty directories with: find . -type d -empty -delete – Ruchit Patel Jun 22 '18 at 09:02
  • 1
    Helpful, but I'm voting to close this question as off-topic because it seems to belong on https://unix.stackexchange.com/ – F. Hauri - Give Up GitHub Sep 02 '19 at 07:37
  • There is a similar question on the ServerFault SE: https://serverfault.com/questions/774715/100-inodes-in-root-directory-how-to-free-inodes – fiktor Jul 12 '20 at 17:05

21 Answers

249

If you are very unlucky, you have used about 100% of all inodes and can't even create the script. You can check this with df -ih.

Then this bash command may help you:

sudo find . -xdev -type f | cut -d "/" -f 2 | sort | uniq -c | sort -n

And yes, this will take time, but you can locate the directory with the most files.
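
If sort cannot even create its temporary files (see the comments below), a single-pass variant that keeps the per-directory counts in awk may help; this is only a sketch of the same idea, and the final sort only sees one line per top-level directory:

sudo find . -xdev -type f | awk -F/ '{ c[$2]++ } END { for (d in c) print c[d], d }' | sort -n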

the Tin Man
  • 158,662
  • 42
  • 215
  • 303
simon
  • 3,378
  • 2
  • 22
  • 32
  • 12
    That does the trick. My problem was an incredible number of sessions in the /lib/php/sessions directory. Maybe somebody has the same problem. – SteMa May 22 '12 at 14:51
  • Parallels Plesk will not load, FTP not being able to open a session, and "internet disk quota exceeded (122)" are some of the problems you'll get when you have reached the maximum number of inodes (~ files). Your service provider may set the max as low as 20,000 inodes (~ files) even if you have UNLIMITED space. – normeus Jun 27 '12 at 14:49
  • 2
    Someone should rewrite this find, cut, uniq sort into a single awk command! – mogsie Oct 15 '12 at 17:14
  • 1
    Sometimes it also helps to try to locate directories that take lots of space. For example, if you have `mod_disk_cache` enabled with Apache default configuration, you'll find that each directory below `/var/cache/apache2/mod_disk_cache` only has sensible amount of entries but the whole hierarchy eats all your inodes. Running `du -hs *` may give hints about places that take more space than you're expecting. – Mikko Rantalainen Oct 19 '12 at 12:13
  • @mogsie, would `awk` be able to handle the potentially millions of lines that find would return? – alxndr Dec 05 '12 at 21:51
  • 9
    @alxndr `awk` could keep a hash of the directory and the count of files without uniqing and sorting a gazillion lines. That said, perhaps here's an improvement: `find . -maxdepth 1 -type d | grep -v '^\.$' | xargs -n 1 -i{} find {} -xdev -type f | cut -d "/" -f 2 | uniq -c | sort -n` — this only sorts the last list. – mogsie Mar 07 '13 at 13:05
  • 14
    If you cannot create any files, even that *can fail* because `sort` may fail to keep everything in the memory and will try to automatically fall back to writing a temporary file. A process which would obviously fail... – Mikko Rantalainen Mar 08 '13 at 07:59
  • 2
    Thanks for this, this totally helped me out. I had a small VM 'run out of space', but really it was the inodes. At first I went around cleaning out large files, but it wasn't helping, then I ran your script and found a directory with 60k little files in it. I got rid of them and now I'm back in business. Thanks! – J_McCaffrey Nov 05 '13 at 21:33
  • 12
    `sort` failed for me, but I was able to give `--buffer-size=10G` which worked. – Frederick Nord Aug 21 '14 at 15:42
  • 1
    @mogsie I used some gawk in [my version](http://stackoverflow.com/a/39003522/4414935). That also counts directories. – jarno Aug 17 '16 at 18:09
  • @mogsie here is a version of your script that counts also directories and handles filenames containing newlines: `find . -maxdepth 1 -not -path . -type d -print0 | xargs -0 -n 1 -I{} find {} -xdev -not -path {} -print0 | gawk 'BEGIN{RS="\0";FS="/";ORS="\0"}{print $2}' | uniq -cz | sort -nz`. The gawk command could be replaced by `grep -ozZ '\./[^/]*/'` (Tested by GNU grep 2.25) Unfortunately `cut` does not handle null terminated lines. – jarno Sep 07 '16 at 22:33
  • @FrederickNord, What's the error message when `sort` fails? How does it report failure? – Pacerier Nov 21 '17 at 16:26
  • @SteMa, Doesn't the directory self-cleanup? – Pacerier Nov 21 '17 at 16:27
  • Thank you, this told me where to look. Now I still had the issue that I couldn't delete the files, because I got "/bin/rm: Argument list too long"; this could then be resolved with `for i in * ; do rm $i ; done`. – CodeMonkey Nov 17 '21 at 13:39
195

It's quite easy for a disk to have a large number of inodes used even if the disk is not very full.

An inode is allocated to a file so, if you have gazillions of files, all 1 byte each, you'll run out of inodes long before you run out of disk.
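
For ext2/3/4 the total number of inodes is fixed when the filesystem is created, so you can check how many you have and how many are free (a sketch; /dev/sda1 is only an example device and tune2fs needs root):

sudo tune2fs -l /dev/sda1 | grep -i inode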

It's also possible that deleting files will not reduce the inode count if the files have multiple hard links. As I said, inodes belong to the file, not the directory entry. If a file has two directory entries linked to it, deleting one will not free the inode.

Additionally, you can delete a directory entry but, if a running process still has the file open, the inode won't be freed.
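
If you suspect that case, a quick way to spot files that are deleted but still held open is (a sketch; +L1 selects open files whose on-disk link count is zero, and root may be needed to see other users' processes):

sudo lsof +L1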

My initial advice would be to delete all the files you can, then reboot the box to ensure no processes are left holding the files open.

If you do that and you still have a problem, let us know.

By the way, if you're looking for the directories that contain lots of files, this script may help:

#!/bin/bash

# count_em - count files in all subdirectories under current directory.
echo 'echo $(ls -a "$1" | wc -l) $1' >/tmp/count_em_$$
chmod 700 /tmp/count_em_$$
find . -mount -type d -print0 | xargs -0 -n1 /tmp/count_em_$$ | sort -n
rm -f /tmp/count_em_$$
alper
  • 2,919
  • 9
  • 53
  • 102
paxdiablo
  • 854,327
  • 234
  • 1,573
  • 1,953
  • 16
    Of course, the `>/tmp/count_em_$$` will only work if you have space for it... if that's the case, see @simon's answer. – alxndr Dec 05 '12 at 21:52
  • 2
    @alxndr, that's why it's often a good idea to keep your file systems separate - that way, filling up something like `/tmp` won't affect your other file systems. – paxdiablo Dec 05 '12 at 23:09
  • Your answer covers the case where the system no longer uses a deleted file after a reboot. But the question asked was how to reclaim or reuse inodes after the files are deleted. Basically, the Linux kernel creates a new inode for each file when it is created, but it does not automatically reclaim the inode when you delete a file. – Mohanraj Apr 16 '13 at 09:51
  • @paxdiablo you said "My initial advice would be to delete all the files you can, then reboot the box to ensure no processes are left holding the files open", but it's a prod server, so I can't reboot; how do I free those inodes without a reboot? – Ashish Karpe Jan 18 '16 at 04:44
  • 2
    @AshishKarpe, I assume you're talking about your *own* situation since the OP made no mention of production servers. If you can't reboot immediately then there are two possibilities. First, hope that the processes in flight eventually close the current files so disk resources can be freed up. Second, even production servers should have scope for rebooting at some point - simply schedule some planned downtime or wait for the next window of downtime to come up. – paxdiablo Jan 18 '16 at 05:12
  • Found that lots of small files had been created in /tmp, which were eating up inodes, so I freed them using the command "find /tmp -type f -mmin +100 -name "*" | perl -nle 'unlink;'". Thanks – Ashish Karpe Jan 18 '16 at 06:27
  • 1
    So grateful for this post. I had a RHEL 6.3 server whose partitions all showed free space, but thanks to the count_em script I was able to see that the inodes in the /var partition were all used up by some weird cache files filling the /var/lib/sss/db directory. All my applications, including auditd and lvm, were screaming "no space left". Now on to the REAL problems... :-( – Unpossible Mar 26 '16 at 16:27
  • 2
    I suppose you want `ls -A` instead of `ls -a`. Why would you want to count . and ..? – jarno Aug 17 '16 at 11:47
  • Many thanks, this saved my day. – Wirat Leenavonganan Jan 11 '22 at 07:21
77

My situation was that I was out of inodes and I had already deleted just about everything I could.

$ df -i
Filesystem     Inodes  IUsed  IFree IUse% Mounted on
/dev/sda1      942080 507361     11  100% /

I am on Ubuntu 12.04 LTS and could not remove the old Linux kernels, which took up about 400,000 inodes, because apt was broken due to a missing package. And I couldn't install the new package because I was out of inodes, so I was stuck.

I ended up deleting a few old Linux kernel headers by hand to free up about 10,000 inodes:

$ sudo rm -rf /usr/src/linux-headers-3.2.0-2*

This was enough to let me install the missing package and fix my apt:

$ sudo apt-get install linux-headers-3.2.0-76-generic-pae

and then remove the rest of the old Linux kernels with apt:

$ sudo apt-get autoremove

Things are much better now:

$ df -i
Filesystem     Inodes  IUsed  IFree IUse% Mounted on
/dev/sda1      942080 507361 434719   54% /
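
If you go the manual route, it is worth double-checking which kernel is actually running before removing anything by hand (a sketch; keep the version reported by uname -r):

uname -r
dpkg -l 'linux-image-*' 'linux-headers-*' | grep ^ii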
LNamba
  • 881
  • 6
  • 4
  • 4
    This was the closest to my own approach in a similar situation. It's worth noting that a more cautious approach is well documented at https://help.ubuntu.com/community/Lubuntu/Documentation/RemoveOldKernels – beldaz Jun 20 '16 at 01:00
  • 1
    My case exactly! But had to use "sudo apt-get autoremove -f" to progress – tonysepia Oct 20 '17 at 11:50
  • Is it safe to do this: `sudo rm -rf /usr/src/linux-headers-3.2.0-2*`, if I am sure I am not using that kernel? – Mars Lee Aug 28 '18 at 20:50
  • @MarsLee You can check which kernel is currently running with "uname -a" – Dominique Eav Aug 30 '18 at 07:44
  • 1
    Calling `$ sudo apt-get autoremove` alone, did the trick for me. – Morten Grum Sep 27 '19 at 09:18
66

My solution:

Try to find out if this is an inode problem with:

df -ih

Try to find root folders with large inode counts:

for i in /*; do echo $i; find $i |wc -l; done

Try to find specific folders:

for i in /src/*; do echo $i; find $i |wc -l; done

If it is Linux headers, try to remove the oldest with:

sudo apt-get autoremove linux-headers-3.13.0-24

Personally, I moved them to a mounted folder (because for me the last command failed) and installed the latest with:

sudo apt-get autoremove -f

This solved my problem.
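
A variant of steps 2-3 that sorts the counts, so the heaviest directory ends up at the bottom (just a sketch combining the loop above with sort):

for i in /*; do printf '%s %s\n' "$(find "$i" 2>/dev/null | wc -l)" "$i"; done | sort -n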

the Tin Man
  • 158,662
  • 42
  • 215
  • 303
dardarlt
  • 1,199
  • 11
  • 12
  • 1
  • In my case the issue was `SpamAssasin-Temp`. `find /var/spool/MailScanner/incoming/SpamAssassin-Temp -mtime +1 -print | xargs rm -f` did the job :) Thanks! – joystick Jan 12 '16 at 10:36
  • 5
    For me, this was taking hours. However, there's a simple solution: When the second command hangs on a particular directory, kill the current command and restart changing /* to whatever directory it was hanging on. I was able to drill down to the culprit – Michael Terry Jun 02 '16 at 15:50
  • I used this variant of your command in order to print the numbers on the same line: `for i in /usr/src/*; do echo -en "$i\t"; find $i 2>/dev/null |wc -l; done` – cscracker Feb 04 '19 at 13:13
  • ```for i in /src/*; do echo "$i, `find $i |wc -l`"; done|sort -nrk 2|head -10``` shows the top 10 largest directories. – Mark Simon Jul 11 '19 at 05:14
  • It works, thanks for saving my time. – Bill.Zhuang Mar 24 '22 at 11:38
15

I had the same problem and fixed it by removing PHP's session directory:

rm -rf /var/lib/php/sessions/

It may be under /var/lib/php5 if you are using an older PHP version.

Recreate it with the following permissions:

mkdir /var/lib/php/sessions/ && chmod 1733 /var/lib/php/sessions/

The default permissions for the directory on Debian are drwx-wx-wt (1733).
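
To see how bad it is first, or to remove only stale session files instead of the whole directory, something like this may help (a sketch; adjust the path and the age threshold to your setup):

find /var/lib/php/sessions -type f | wc -l
find /var/lib/php/sessions -type f -mmin +1440 -delete    # sessions untouched for a day, as an example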

Anyone_ph
  • 616
  • 6
  • 15
  • 1
    Any idea why this happens? – Sibidharan Jul 04 '17 at 13:49
  • 1
    @Sibidharan in my case it was because the PHP cron job to clear the old PHP sessions was not working. – grim Feb 06 '18 at 15:49
  • 4
    `rm -rf /var/lib/php/sessions/*` would probably be a better command - it won't remove the session directory, just its contents... Then you don't have to worry about recreating it – Shadow Jul 09 '18 at 08:10
  • I did not have a PHP session issue but a Magento session issue, similar to this. Thanks for the direction. – Mohit Feb 21 '19 at 06:44
  • PHP sessions should not be cleared via cron jobs; set session.gc_maxlifetime in php.ini https://www.php.net/manual/en/session.configuration.php#ini.session.gc-maxlifetime – Nov 01 '19 at 02:15
4

First, get the inode usage:

df -i

The next step is to find those files. For that, we can use a small script that lists the directories and the number of files in them.

for i in /*; do echo $i; find $i |wc -l; done

From the output, you can see which directory uses a large number of files; then repeat this script for that directory, as below. Repeat it until you find the suspect directory.

for i in /home/*; do echo $i; find $i |wc -l; done
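
Once a candidate shows up, you can count its entries directly before touching anything (the path below is only an example):

find /home/bad_user/directory_with_lots_of_empty_files | wc -l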

When you find the suspect directory with a large number of unwanted files, just delete the unwanted files in that directory to free up some inodes with the following command:

rm -rf /home/bad_user/directory_with_lots_of_empty_files

You have successfully solved the problem. Check the inode usage again with the df -i command and you will see the difference:

df -i
ashique
  • 935
  • 2
  • 8
  • 26
3

You can use rsync to delete a large number of files:

rsync -a --delete blanktest/ test/

Create a blanktest folder with 0 files in it, and the command will sync your test folder (which has a large number of files) with it. I have deleted nearly 5 million files using this method.

Thanks to http://www.slashroot.in/which-is-the-fastest-method-to-delete-files-in-linux
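
The full sequence might look like this (a sketch; blanktest is just an empty scratch directory and test/ stands for the directory you want to empty):

mkdir blanktest
rsync -a --delete blanktest/ test/
rmdir blanktest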

VIGNESH
  • 69
  • 1
  • 7
  • 1
    From what I can tell from the article/comments, this is faster than `rm *` for lots of files, due to expanding the wildcard and passing/processing each argument, but `rm test/` is fine for deleting a `test/` folder containing lots of files. – mwfearnley Dec 11 '18 at 16:20
  • Heads up, this works well, but make sure you set the permissions correctly on the blank directory! I didn't do this and inadvertently changed the permissions on my PHP sessions directory. Took two hours to figure out what I screwed up. – aecend Feb 21 '20 at 21:21
2

We experienced this on a HostGator account (which places inode limits on all its hosting) following a spam attack. It left vast numbers of queue records in /root/.cpanel/comet. If this happens and you find you have no free inodes, you can run this cPanel utility through the shell:

/usr/local/cpanel/bin/purge_dead_comet_files
devgroop
  • 129
  • 4
2

Late answer: In my case, it was my session files under

/var/lib/php/sessions

that were using up inodes.
I was even unable to open my crontab or make a new directory, let alone trigger a deletion operation. Since I use PHP, we have this guide, from which I copied the code from example 1 and set up a cron job to execute that part of the code.

<?php
// Note: This script should be executed by the same user as the web server process.

// Need active session to initialize session data storage access.
session_start();

// Executes GC immediately
session_gc();

// Clean up session ID created by session_gc()
session_destroy();
?>
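
The cron entry itself might look something like this (hypothetical path and schedule, added with crontab -e as the web-server user; /usr/bin/php is assumed to be the PHP CLI binary):

0 * * * * /usr/bin/php /path/to/session_cleanup.php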

If you're wondering how I managed to open my crontab, well, I deleted some sessions manually through the CLI.

Hope this helps!

Han
  • 575
  • 8
  • 21
1

eAccelerator could be causing the problem, since it compiles PHP into blocks... I've had this problem with an Amazon AWS server on a site under heavy load. Free up inodes by deleting the eAccelerator cache in /var/cache/eaccelerator if you continue to have issues.

rm -rf /var/cache/eaccelerator/*

(or whatever your cache dir)

supershwa
  • 41
  • 1
1

We faced a similar issue recently. If a process still refers to a deleted file, its inode is not released, so you need to check lsof /, and killing or restarting the process will release the inodes.

Correct me if I am wrong here.
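
A rough way to check is something like this (a sketch; on Linux, lsof marks unlinked-but-still-open files with "(deleted)" in the last column):

sudo lsof / | grep '(deleted)'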

msp
  • 3,272
  • 7
  • 37
  • 49
Razal
  • 31
  • 4
1

As said before, a filesystem may run out of inodes if there are a lot of small files. I have provided some means to find the directories that contain the most files here.

Community
  • 1
  • 1
jarno
  • 787
  • 10
  • 21
1

In one of the above answers it was suggested that sessions were the cause of running out of inodes, and in our case that is exactly what it was. To add to that answer, though, I would suggest checking the php.ini file and ensuring session.gc_probability = 1, session.gc_divisor = 1000 and session.gc_maxlifetime = 1440. In our case, session.gc_probability was equal to 0, which caused this issue.
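
In php.ini that corresponds to the following lines (the values are simply the ones mentioned above: a 1/1000 chance of garbage collection per request and a 1440-second session lifetime):

session.gc_probability = 1
session.gc_divisor = 1000
session.gc_maxlifetime = 1440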

Jared Forth
  • 1,577
  • 6
  • 17
  • 32
George W
  • 61
  • 1
  • 4
1

This article saved my day: https://bewilderedoctothorpe.net/2018/12/21/out-of-inodes/

find . -maxdepth 1 -type d | grep -v '^\.$' | xargs -n 1 -i{} find {} -xdev -type f | cut -d "/" -f 2 | uniq -c | sort -n
1

On a Raspberry Pi I had a problem with the /var/cache/fontconfig directory containing a large number of files. Removing it took more than an hour. And of course rm -rf *.cache* raised an "Argument list too long" error. I used the one below:

find . -name '*.cache*' | xargs rm -f
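
With GNU find you can also let find do the deleting itself, which avoids both the long argument list and the pipe (a sketch; double-check the pattern before adding -delete):

find /var/cache/fontconfig -name '*.cache*' -type f -delete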
itiic
  • 3,284
  • 4
  • 20
  • 31
0

You can see this info with:

for i in /var/run/*;do echo -n "$i "; find $i| wc -l;done | column -t
张馆长
  • 1,321
  • 10
  • 11
0

For those who use Docker and end up here,

When df -i says 100% inode use,

just run docker rmi $(docker images -q)

It will leave your created containers (running or exited) alone, but will remove all images that aren't referenced anymore, freeing a whole bunch of inodes; I went from 100% back to 18%!

Also, it might be worth mentioning that I use a lot of CI/CD with a Docker runner set up on this machine.
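
If you would rather only remove images that no container references at all, the narrower command is docker image prune (with -a it also removes unused tagged images, not just dangling layers):

docker image prune -a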

0

It could be the /tmp folder (where all the temporary files are stored, from yarn and npm script executions for example, especially if you are starting a lot of Node scripts). Normally, you just have to reboot your device or server, and it will delete all the temporary files that you don't need. For me, I went from 100% usage down to 23%!
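
To check whether /tmp really is the culprit before rebooting, a quick count may help (a sketch; -xdev keeps find on the same filesystem):

sudo find /tmp -xdev | wc -l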

MH info
  • 52
  • 8
-1

There are many answers to this one so far, and all of the above seem concrete. I think you'll be safe using stat as you go along but, depending on the OS, you may get some inode errors creeping up on you. So implementing your own stat call using 64-bit values to avoid any overflow issues seems fairly compatible.

kinokaf
  • 39
  • 2
-1

Run the sudo apt-get autoremove command; in some cases it works. If unused header data from previous kernels exists, it will be cleaned up.

-2

If you use Docker, remove all images. They use a lot of space...

Stop all containers:

docker stop $(docker ps -a -q)

Delete all containers:

docker rm $(docker ps -a -q)

Delete all images:

docker rmi $(docker images -q)

Works for me.

Pankaj Shinde
  • 3,361
  • 2
  • 35
  • 44
  • 1
    This does not help to detect if "too many inodes" are the problem. – Mark Stosberg Jun 13 '19 at 18:28
  • This has nothing to do with Docker. – Urda Jul 12 '19 at 17:39
  • @Urda I have a similar issue on a VM, Ubuntu 18.04, with 9 containers. After bringing down all containers (one threw a timeout), df -i returned 86%; after re-upping the 5 main containers (used in production), df -i returned 13%! – bcag2 Jan 12 '22 at 11:17
  • @Urda: Disagree. It might have to do with Docker containers. The problem occurred for me after moving Docker to a smaller partition. After deleting the folder containing my Docker images, the problem was solved (inode count came down from 100% to 2%). – mcExchange Sep 29 '22 at 05:34
  • @mcExchange nope, this still has nothing to do with `docker`. Of course removing images frees space on your drive which you have incorrectly attributed to `docker` itself and not your lack of free space management. One more and final time: This has nothing to do with `docker`. – Urda Feb 06 '23 at 21:36