
I have a directory with a large number of 0-byte files in it. I can't even see the files when I use the ls command. I'm using a small script to delete these files, but sometimes it doesn't even delete them. Here is the script:

i=100
while [ $i -le 999 ];do
    rm -f file${i}*;
    let i++;
done

Is there any other way to do this more quickly?

small_ticket
  • If the files are 0 bytes and 'ls' does not show them, how do you know they are there? – JRT Jul 01 '10 at 12:12
  • I know because I was able to see them a few times. These 0-byte files have appeared several times; I don't know when or how, and sometimes I can see them and sometimes, like now, I can't. However, I know the cause of the problem and it occurred again, so I know they are in that directory. – small_ticket Jul 01 '10 at 12:55
  • Other than `while [ $i -le 999 ]` and `let i++`, you can also use `seq` with `for i in $(seq ...)` (see the sketch below). – YuppieNetworking Jul 01 '10 at 13:00
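
A minimal sketch of that seq-based variant (assuming the same file100 ... file999 naming as the script in the question):

for i in $(seq 100 999); do
    rm -f "file${i}"*    # delete every file with this numeric prefix, as in the original loop
done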

10 Answers


Use find combined with xargs.

find . -name 'file*' -size 0 -print0 | xargs -0 rm

This avoids starting a separate rm process for every file.

Didier Trosset
  • +1 for `xargs`. Much better than `-exec`. Consider using `-print0` and `-0` for safety. – Martin Wickman Jul 01 '10 at 11:54
  • Thanks, I'm giving this one a shot. I'll post the result. – small_ticket Jul 01 '10 at 12:56
  • `-exec` will start a new process with each argument. `xargs` won't. This greatly reduces the number of processes started, and greatly improves execution time. See `man xargs` for more info. – Didier Trosset Jul 01 '10 at 13:36
  • Edit: yes, it worked! (my mistake) Thanks a lot; it's not that fast, but it is handy. – small_ticket Jul 02 '10 at 05:21
  • You want to use `-size 0c`. `-size 0` will include files smaller than 512 bytes. – tumtumtum Feb 14 '13 at 19:35
  • @tumtumtum It is true that `-size 0c` would be more correct (no unit specified defaults to blocks), but you're wrong in stating that `-size 0` will include files smaller than 512 bytes. Indeed, as soon as a file is 1 byte in size, it occupies 1 block. – Didier Trosset Feb 15 '13 at 11:13
  • This works for Ubuntu: `find . -type f -name '*' -size 0 -print0 | xargs -r0 rm` – seanbreeden Aug 03 '16 at 12:16
  • Warning: if you are new to POSIX-based systems, you should know that device files on POSIX systems are 0 bytes in length; e.g. your keyboard is a device. – Kevin Genus Sep 19 '19 at 20:10
  • Should include `-type f` as seanbreeden suggests; otherwise folders will be included and will abort the rm call prematurely. – ctpenrose May 31 '20 at 03:50

With GNU find (see comments), there is no need to use xargs:

find -name 'file*' -size 0 -delete
coredump
  • Nice - I didn't realize find had a delete action. – GreenMatt Jul 01 '10 at 16:43
  • Only in GNU find. POSIX does not specify actions like `-delete` and `-ls`. – jim mcnamara Jul 01 '10 at 16:54
  • Note that obviously you don't need the `-name 'file*'` part if you don't filter by name. – Skippy le Grand Gourou Aug 03 '14 at 15:24
  • To make Skippy's comment copy-and-pasteable: just use `find . -size 0 -delete`. – Colin D Jun 05 '17 at 17:47
  • How could I use it on a specific folder path, something like `/home/user/file*`? @coredump – alper Nov 27 '18 at 09:37
  • @alper `find` accepts a directory as its first argument, i.e. `find /home/user/ -name "file* ..."`. **Highly recommended: first use with `-print` instead of `-delete`, and only when the result is satisfactory, delete the files.** – coredump Nov 27 '18 at 09:42
  • Why should I use `-print` before `-delete`? If I understand correctly, when `find ./ -type f -size 0 -print` returns valid output, then I can do `find ./ -type f -size 0 -delete`. @coredump – alper Nov 27 '18 at 09:54
  • @alper Yes, first try with print (instead of delete), and if that works, you can delete safely (two separate commands) – coredump Nov 27 '18 at 10:19

If you want to find and remove all 0-byte files in a folder:

find /path/to/folder -size 0 -delete
Giangimgs
find . -maxdepth 1 -type f -size 0 -delete

This finds the files with size 0 in the current directory, without going into sub-directories, and deletes them.

To list the files without removing them:

find . -maxdepth 1 -type f -size 0
user7194913

You can use the following command:

find . -maxdepth 1 -size 0c -exec rm {} \;

And if you are looking to delete the 0-byte files in subdirectories as well, omit -maxdepth 1 from the previous command and execute it.
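
For example, the recursive variant is just the same command without -maxdepth 1:

find . -size 0c -exec rm {} \;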

jitendra

Delete all files named file... in the current directory:

find . -name file* -maxdepth 1 -exec rm {} \;

This will still take a long time, as it starts rm for every file.

Sjoerd
  • I guess you should use double quotes: `-name "file*"`. Otherwise the pattern will be expanded by the shell. – Philipp Jul 01 '10 at 11:49
  • This doesn't limit the `rm` to files with 0 bytes. To be fair, though, neither does the code the OP posted. – Nathan Fellman Jul 01 '10 at 12:06
  • You can use `+` instead of `;` to have `find` call `rm` with multiple arguments instead of invoking a process for each file (see the combined sketch below). – Philipp Jul 01 '10 at 13:25
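
Combining these comments, a sketch that quotes the pattern, restricts to empty files, and batches arguments (assuming GNU find for the 0c size suffix):

find . -maxdepth 1 -name "file*" -size 0c -exec rm {} +
# '+' passes many file names to a single rm invocation instead of starting one per file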

You can even use the `-delete` option, which deletes the matched files directly.

From `man find`: "`-delete` Delete files; true if removal succeeded."
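
For example, combined with the size test used in the other answers (a sketch assuming GNU find, which is what provides -delete):

find . -name 'file*' -size 0 -delete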

thegeek

Here is an example; trying it yourself will help this make sense:

bash-2.05b$ touch empty1 empty2 empty3
bash-2.05b$ cat > fileWithData1
Data Here
bash-2.05b$ ls -l
total 0
-rw-rw-r--    1 user group           0 Jul  1 12:51 empty1
-rw-rw-r--    1 user group           0 Jul  1 12:51 empty2
-rw-rw-r--    1 user group           0 Jul  1 12:51 empty3
-rw-rw-r--    1 user group          10 Jul  1 12:51 fileWithData1
bash-2.05b$ find . -size 0 -exec rm {} \;
bash-2.05b$ ls -l
total 0
-rw-rw-r--    1 user group          10 Jul  1 12:51 fileWithData1

If you have a look at the man page for find (type man find), you will see an array of powerful options for this command.

Noel M

"...sometimes that does not even delete these files" makes me think this might be something you do regularly. If so, this Perl script will remove any zero-byte regular files in your current directory. It avoids rm altogether by using a system call (unlink), and is quite fast.

#!/usr/bin/env perl
use warnings;
use strict;

my @files = glob "* .*";      # include hidden files as well
for (@files) {
    next unless -e and -f;    # skip anything that is not a regular file
    unlink if -z;             # remove it if it is zero bytes
}
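
A hedged usage sketch (delete_empty.pl is just an assumed name for the script above; the path is a placeholder):

cd /path/to/directory    # the directory containing the 0-byte files (placeholder path)
perl delete_empty.pl     # hypothetical filename for the script above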
andereld
  • Hm, it works for me. It must have something to do with your other (Java/Selenium-related) problem. Either that, or the files you're trying to remove aren't regular files. I don't think the code is faulty. – andereld Jul 03 '10 at 20:10

Going up a level, it's worthwhile to figure out why the files are there. By deleting them you're just treating a symptom. What if some program is using them to lock resources? If so, deleting them could lead to corruption.

lsof is one way you might figure out which processes have a handle on the empty files.
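
A hedged sketch of how lsof might be used here (the paths are placeholders):

lsof +D /path/to/directory         # list processes with files open anywhere under this directory
lsof /path/to/directory/file123    # or check a single suspect file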

Paul Rubel
  • The reason why they are there is here: http://stackoverflow.com/questions/3157144/tomcat-creates-0-byte-files I'm also trying to solve that problem. – small_ticket Jul 01 '10 at 13:36