
I'm working on Linux. There is a folder that contains lots of subdirectories, and I need to delete all subdirectories that share the same name. For example,

dir
 |---subdir1
 |---subdir2
 |     |-----subdir1
 |---file

I want to delete every subdir1. Here is my script:

find dir -type d -name "subdir1" | while read directory ; do
    rm -rf $directory
done

However, when I execute it, it seems that nothing happens.

I've also tried find dir -type d -name "subdir1" -delete, but still nothing happens.

hek2mgl
Yves

4 Answers


If find finds the correct directories at all, these should work:

find dir -type d -name "subdir1" -exec echo rm -rf {} \; 

or

find dir -type d -name "subdir1" -exec echo rm -rf {} +

(The echo is there to verify that the command hits the directories you wanted; remove it to actually run rm and delete them.)
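A self-contained way to try this on a throwaway copy of the layout from the question (the temp-directory setup is illustrative, not part of the answer):

```shell
# Build a disposable tree matching the question's layout
tmp=$(mktemp -d)
mkdir -p "$tmp/dir/subdir1" "$tmp/dir/subdir2/subdir1"
touch "$tmp/dir/file"

# Dry run: only prints the rm command that would be executed
find "$tmp/dir" -type d -name "subdir1" -exec echo rm -rf {} +

# Real run: actually removes every subdir1
find "$tmp/dir" -type d -name "subdir1" -exec rm -rf {} +

rm -rf "$tmp"   # clean up the demo tree
```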

Both piping to xargs and to while read have the downside that unusual file names will cause issues. Also, find -delete will only try to remove the directories themselves, not their contents. It will fail on any non-empty directories (but you should at least get errors).

With xargs, spaces separate words by default, so even file names with spaces will not work. read can deal with spaces, but in your command it's the unquoted expansion of $directory that splits the variable on spaces.

If your filenames don't have newlines or trailing spaces, this should work, too:

find ... | while read -r x ; do rm -rf "$x" ; done
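To see the word-splitting difference in isolation (the variable name here is just for illustration):

```shell
x="sub dir1"

printf '<%s>\n' $x      # unquoted: splits -> prints <sub> and <dir1>
printf '<%s>\n' "$x"    # quoted: stays one word -> prints <sub dir1>
```

An unquoted `rm -rf $x` would therefore try to remove two entries named "sub" and "dir1" instead of the one directory you meant.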
ilkkachu
  • I can see the messages coming from `echo` but all of `subdir1` are always there. Is it because of some format problems or some permission issues? The `dir` comes from Windows System and there are many unusual filenames such as names containing `&` – Yves May 09 '17 at 13:44
  • @Yves, and if you remove the echo to actually run the `rm` command, do you get any errors? `find -exec` shouldn't mind unusual filenames, as long as no other part of the system does. – ilkkachu May 09 '17 at 13:46

With the globstar option (enable with shopt -s globstar, requires Bash 4.0 or newer):

rm -rf **/subdir1/

The drawback of this solution as compared to using find -exec or find | xargs is that the argument list might become too long, but that would require quite a lot of directories named subdir1. On my system, ARG_MAX is 2097152.
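To check the limit on your own system and to preview the glob before deleting anything, a cautious sketch (the printf line is just a dry run):

```shell
# How many bytes of arguments + environment a command may receive
getconf ARG_MAX

# Requires Bash 4.0+; ** then matches any number of directory levels
shopt -s globstar

# Dry run: list the matches first (prints the literal pattern if nothing matches)
printf '%s\n' **/subdir1/

# If the list looks right, delete for real
rm -rf **/subdir1/
```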

Benjamin W.

Using xargs:

find dir -type d -name "subdir1" -print0 | xargs -0 rm -rf
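A quick check that the NUL-separated pipeline survives awkward names, using a throwaway tree with a space in the directory name:

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/dir/sub dir1" "$tmp/dir/nested/sub dir1"

# -print0 / -0 pass names NUL-terminated, so spaces (even newlines) are safe
find "$tmp/dir" -type d -name "sub dir1" -print0 | xargs -0 rm -rf

find "$tmp/dir" -type d -name "sub dir1"   # no output: both are gone
rm -rf "$tmp"
```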

Some information not directly related to the question/problem:

find|xargs or find -exec

https://www.everythingcli.org/find-exec-vs-find-xargs/

Kent
    Exactly what I was going to answer. But consider `find -print0` and `xargs -0` to account for filenames with spaces. – slim May 09 '17 at 13:37
  • It should definitely be `-exec rm -rf` instead of `xargs` (if you want to use `-f` at all) (Needed to vote here, sorry) – hek2mgl May 09 '17 at 13:37
  • @hek2mgl If you want to make an answer of the `-exec` option, go ahead (I will upvote). `xargs` works and is a pattern that can be reused in lots of circumstances. – slim May 09 '17 at 13:39
  • The problem is that `xargs` performs word splitting. – hek2mgl May 09 '17 at 13:40
  • @hek2mgl `print0` I agree, about xargs or exec: https://danielmiessler.com/blog/linux-xargs-vs-exec/ ;regarding the `-f` I saw OP had it, so I added too. – Kent May 09 '17 at 13:41
    @Kent Looks like the guy who wrote the `xargs vs. exec` blog post is missing the opportunity to use `-exec ls {} +`. – hek2mgl May 09 '17 at 13:44
  • @Kent although in this case we know the subdir only occurs once, so `xargs`' big win isn't there. However, I think it's worth sticking to one idiom whenever you're "doing something with the stuff `find` finds". – slim May 09 '17 at 13:44
  • No, no, no, no, ... ! `xargs` introduces problems but doesn't solve problems here. It can be considered *wrong* – hek2mgl May 09 '17 at 13:46
  • @slim indeed, I always do `find|xargs` I knew there is `-exec` but never (really) used it. not because it is bad, it is easy for me to remember. :) – Kent May 09 '17 at 13:46
  • @hek2mgl if you meant the space in filename problem, I agree, `print0` and `-0` are required. I don't see other problems with xargs... pls tell if I am wrong, after all I didn't use `exec`, perhaps I missed something – Kent May 09 '17 at 13:48
  • `-exec ... +` is new to me. I think it may have backward-compatibility issues, but that might not matter to some people. `-print0 / -0` solves all the problems I know of with `xargs`. – slim May 09 '17 at 13:49
  • @hek2mgl some more info about exec and xargs... you are right `exec +` is fast too. but I would stick to `xargs`, it is fast and can combine with other commands. https://www.everythingcli.org/find-exec-vs-find-xargs/ – Kent May 09 '17 at 13:57
  • When `-print0` is used together with `xargs -0` the command is safe, that's true. (That was not used, in the moment I voted).. Anyhow why should I join by `\0`, pipe to an extra program, split by `\0` again instead of just using `-exec rm -r {} +`. Afar from the performance "issues", even the less keystrokes are a benefit of `-exec`. – hek2mgl May 09 '17 at 13:57
  • Anyhow, the command is not wrong any more. That's why I've removed my vote. – hek2mgl May 09 '17 at 13:58
    @hek2mgl thank you. I put the link in answer. and learned `exec +` today, thanks to your comments. – Kent May 09 '17 at 14:01
    @hek2mgl So I just found the GNU findutils changelog, and `-exec ... {} +` was added in 2005. That's a long time ago, but a while after I learned shell scripting :-O. Piping to xargs with `\0` is slightly *faster* according to my simple benchmark, which makes some sense because it slightly parallelizes things, but I can agree that `-exec ... {}` could be seen as cleaner. – slim May 09 '17 at 14:01
  • If there are many (many) files, piping output to a separate program to handle deletions (i.e. piping to `xargs -0 -n 1` or `while read -r -d $'\0'`) has the advantage of using a *separate process*, which could be advantageous in multi-CPU environments. Also, it doesn't depend on a super-long command line. If you were trying to match, say, millions of files, the single command line generated by `xargs` alone or `find ... -exec .. +` would barf. – ghoti May 09 '17 at 14:55
  • @ghoti First, thanks for teaching me the word `barf`. Never heard before but it is good to know :) .. About the topic: `+`, or `xargs` will just pass as many arguments as possible to avoid `too many arguments` errors. Are those errors what you meant with *barf*? – hek2mgl May 09 '17 at 18:08
  • PS: Imo if there are millions of files, two separate processes may be of importance: `find` and `rm`. Atm I don't see the advantage of the extra `xargs` process. (But I don't know how `find` is implemented under the hood. Also both `find` and `rm` would access the same filesystem. I don't understand the consequences of this (in Linux) completely atm but still don't see the usefulness of (good old) `xargs`). I personally see the advantage of `xargs` more when used with programs which don't support something like `-exec`. – hek2mgl May 09 '17 at 18:33

From the question, it seems you've tried to use while with find. The following substitution may help you:

while IFS= read -rd '' dir; do rm -rf "$dir"; done < <(find dir -type d -name "subdir1" -print0)
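For example, against a throwaway tree (Bash is required for the process substitution and for read -d ''):

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/dir/subdir1" "$tmp/dir/sub dir2/subdir1"

# NUL-delimited read handles spaces and other odd characters in paths
while IFS= read -rd '' d; do rm -rf "$d"; done \
    < <(find "$tmp/dir" -type d -name "subdir1" -print0)

find "$tmp/dir" -type d -name "subdir1"   # no output: all copies removed
rm -rf "$tmp"
```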