82

Using find . -print0 seems to be the only safe way of obtaining a list of files in bash due to the possibility of filenames containing spaces, newlines, quotation marks etc.

However, I'm having a hard time actually making find's output useful within bash or with other command line utilities. The only way I have managed to make use of the output is by piping it to perl, and setting perl's input record separator ($/) to the null character:

find . -print0 | perl -e '$/="\0"; @files=<>; print scalar @files;'

This example prints the number of files found, avoiding the danger of newlines in filenames corrupting the count, as would occur with:

find . | wc -l

As most command line programs do not support null-delimited input, I figure the best thing would be to capture the output of find . -print0 in a bash array, like I have done in the perl snippet above, and then continue with the task, whatever it may be.

How can I do this?

This doesn't work:

find . -print0 | ( IFS=$'\0' ; array=( $( cat ) ) ; echo ${#array[@]} )

A much more general question might be: How can I do useful things with lists of files in bash?

Idris
  • What do you mean by doing useful things? – Balázs Pozsár Jul 12 '09 at 22:04
  • 5
    Oh, you know, the usual things arrays are useful for: finding out their size; iterating over their contents; printing them out backwards; sorting them. That kind of thing. There are a wealth of utilities in unix for doing these things with data: wc, bash's for-loops, tac and sort respectively; but these all seem useless when dealing with lists which might have spaces or newlines in them. I.e. filenames. Piping data around with null valued input-field-separators seems to be the solution, but very few utilities can handle this. – Idris Jul 12 '09 at 23:14
  • 1
    Here's an essay on how to properly handle filenames in shell, with lots of specifics: [http://www.dwheeler.com/essays/filenames-in-shell.html](http://www.dwheeler.com/essays/filenames-in-shell.html) – David A. Wheeler May 23 '10 at 16:54

13 Answers

110

Shamelessly stolen from Greg's BashFAQ:

unset a i
while IFS= read -r -d $'\0' file; do
    a[i++]="$file"        # or however you want to process each file
done < <(find /tmp -type f -print0)

Note that the redirection construct used here (cmd1 < <(cmd2)) is similar to, but not quite the same as, the more usual pipeline (cmd2 | cmd1) -- if the commands are shell builtins (e.g. while), the pipeline version executes them in subshells, and any variables they set (e.g. the array a) are lost when they exit. cmd1 < <(cmd2) only runs cmd2 in a subshell, so the array lives past its construction. Warning: this form of redirection is available only in bash, and not even when bash runs in sh-emulation mode; you must start your script with #!/bin/bash.
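
For instance, here is a minimal sketch of the contrast, using the same /tmp example as above:

# Pipeline form: the while loop runs in a subshell, so the array is empty afterwards
unset a i
find /tmp -type f -print0 | while IFS= read -r -d $'\0' file; do
    a[i++]="$file"
done
echo "${#a[@]}"    # prints 0

# Process-substitution form: only find runs in a subshell, so the array survives
unset a i
while IFS= read -r -d $'\0' file; do
    a[i++]="$file"
done < <(find /tmp -type f -print0)
echo "${#a[@]}"    # prints the number of files found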

Also, because the file processing step (in this case, just a[i++]="$file", but you might want to do something fancier directly in the loop) has its input redirected, it cannot use any commands that might read from stdin. To avoid this limitation, I tend to use:

unset a i
while IFS= read -r -u3 -d $'\0' file; do
    a[i++]="$file"        # or however you want to process each file
done 3< <(find /tmp -type f -print0)

...which passes the file list via unit 3, rather than stdin.
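
For example, a loop body that itself reads from stdin still works with this arrangement; the confirmation prompt below is just a hypothetical illustration:

unset a i
while IFS= read -r -u3 -d $'\0' file; do
    # this read uses stdin (the terminal), not descriptor 3
    read -p "Keep '$file'? [y/N] " answer
    [[ $answer == [Yy]* ]] && a[i++]="$file"
done 3< <(find /tmp -type f -print0)
echo "${#a[@]} files kept"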

Gordon Davisson
  • Ahhh almost there... this is the best answer yet. However, I've just tried it on a directory containing a file with a newline in its name, and upon inspecting that element using echo ${a[1]}, the newline seems to have become a space (0x20). Any idea why this is happening? – Idris Jul 14 '09 at 00:40
  • What version of bash are you running? I've had trouble with older versions (unfortunately I don't remember precisely which) not dealing with newlines and deletes (`\177`) in strings. IIRC, even x="$y" wouldn't always work right with these characters. I just tested with bash 2.05b.0 and 3.2.17 (the oldest and newest I have handy); both handled newlines properly, but v2.05b.0 ate the delete character. – Gordon Davisson Jul 14 '09 at 07:34
  • I've tried it on 3.2.17 on osx, 3.2.39 on linux and 3.2.48 on netBSD; all turn newline into space. – Idris Jul 14 '09 at 13:09
  • Very strange; I was testing 2.05b.0 and 3.2.17 under OS X, and I just tried 3.2.0 under NetBSD; all worked (except v2.05b.0 eating delete). How're you checking the contents of the array? Try `ls -Bd "${a[@]}"`; that should display the newline as `\012`, or give a "No such file or directory" error if it gets mangled in any way. – Gordon Davisson Jul 14 '09 at 15:58
  • Ah, yes, I was just echoing it. echo "${a[2]}" works, but not if the quotation marks are absent. That means you win the prize! – Idris Jul 14 '09 at 22:08
  • 15
    `-d ''` is equivalent to `-d $'\0'`. – l0b0 Oct 31 '11 at 13:38
  • 16
    An easier way to add an element to the end of an array is: `arr+=("$file")` – dogbane Oct 02 '12 at 10:06
  • The warning on using #!/bin/bash (versus bash in sh-emulation mode for the redirection) was very helpful. – kenj Nov 16 '13 at 20:15
  • @GordonDavisson you must not rely on the static file-descriptor "3" as it may be already used. Instead you can allocate the next free file-descriptor as explained here: http://stackoverflow.com/a/17030546/198219 Here is a combined example: https://blog.famzah.net/2016/10/20/bash-process-null-terminated-results-piped-from-external-commands/ – famzah Oct 20 '16 at 09:23
  • 1
    @CMCDragonkai: `readarray` was added in bash version 4, which was barely out when I wrote this answer. And some OSes (*cough* [macOS](http://meta.ath0.com/2012/02/05/apples-great-gpl-purge/) *cough*) still use bash v3, so it still isn't safe to assume `readarray` is available. As a result, I haven't actually done the work needed to figure out the possible gotchas with `readarray` and how to avoid them. If I get around to it, I'll update the answer. – Gordon Davisson Jan 06 '17 at 18:52
15

Since Bash 4.4, the builtin mapfile has the -d switch (to specify a delimiter, similar to the -d switch of the read statement), and the delimiter can be the null byte. Hence, a nice answer to the question in the title

Capturing output of find . -print0 into a bash array

is:

mapfile -d '' ary < <(find . -print0)
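
Once the array is filled, you can use it like any other; a small usage sketch (the -t option, which strips the trailing delimiter from each entry, is a common addition):

mapfile -d '' -t ary < <(find . -print0)
echo "${#ary[@]} files found"
for f in "${ary[@]}"; do
    printf '%s\n' "$f"    # a newline inside a name will still span two lines here
done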
gniourf_gniourf
  • 3
    That looks much more elegant and worked like a charm for locate, too: `mapfile -d '' list < <(locate -b -0 -r "$1$")`. – user unknown Feb 07 '21 at 16:44
  • This answer is correct and elegant, though I made the mistake of re-ordering the arguments to mapfile: `mapfile ary -d ''` does *not* do the same thing. – Jonathan Mayer Jan 10 '23 at 00:41
6

Maybe you are looking for xargs:

find . -print0 | xargs -r0 do_something_useful

The -L 1 option could be useful for you too; it makes xargs run do_something_useful with only one file argument at a time.
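
For example, with a concrete command in place of do_something_useful (grep here is only an illustration):

find . -type f -print0 | xargs -r0 grep -l 'some pattern'    # list files containing the pattern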

Balázs Pozsár
  • 4
    This isn't quite what I was after, because there is no opportunity to do array-like things with the list, such as sorting: you must use each element as and when it appears out of the find command. If you could elaborate on this example, with the "do_something_useful" part being a bash array-push operation, then this might be what I'm after. – Idris Jul 13 '09 at 12:33
6

The main problem is that the NUL delimiter (\0) is useless here, because it isn't possible to assign a NUL value to IFS. So, as good programmers, we make sure that the input to our program is something it can handle.

First we create a little program, which does this part for us:

#!/bin/bash
printf "%s" "$@" | base64

...and call it base64str (don't forget chmod +x)

Second we can now use a simple and straightforward for-loop:

for i in `find -type f -exec base64str '{}' \;`
do 
  file="`echo -n "$i" | base64 -d`"
  # do something with file
done

So the trick is that a base64 string contains no characters that cause trouble for bash; of course, xxd or something similar could also do the job.
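
To see why this is safe, the encode/decode round trip reproduces the name exactly, even with a newline in it (the filename below is just a made-up example):

f=$'my holiday/with\nnewline.jpg'
enc=$(printf '%s' "$f" | base64)
dec=$(printf '%s' "$enc" | base64 -d)
[ "$dec" = "$f" ] && echo "round trip OK"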

zstegi
  • 1
    One must ensure that the part of the filesystem that find is processing does not change from when find is invoked until when the script completes. If this is not the case, a race condition results, which can be exploited to invoke commands on the wrong files. For instance, a directory to be deleted (say /tmp/junk) could be replaced by a symlink to /home by an unprivileged user. If the find command was running as root, and it was find -type d -exec rm -rf '{}' \;, this would delete all users' home folders. – Demi Sep 11 '13 at 23:23
  • 3
    `read -r -d ''` will read everything up to the next NUL into `"$REPLY"`. There's no need to care about `IFS`. – Charles Duffy Mar 19 '14 at 22:05
4

Yet another way of counting files:

find /DIR -type f -print0 | tr -dc '\0' | wc -c 
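
This works because tr -dc '\0' deletes every byte except the NUL separators, so wc -c ends up counting one byte per file. For example, to keep the result in a variable (/DIR is just a placeholder directory):

count=$(find /DIR -type f -print0 | tr -dc '\0' | wc -c)
echo "Found $count files"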
1

I think more elegant solutions exist, but I'll toss this one in. This will also work for filenames with spaces and/or newlines:

i=0;
for f in *; do
  array[$i]="$f"
  ((i++))
done

You can then e.g. list the files one by one (in this case in reverse order):

for ((i = $i - 1; i >= 0; i--)); do
  ls -al "${array[$i]}"
done

This page gives a nice example, and for more see Chapter 26 in the Advanced Bash-Scripting Guide.

Stephan202
  • This (and other similar examples below) is almost what I'm after - but with a big problem: it only works for globs of the current directory. I would like to be able to manipulate completely arbitrary lists of files; the output of "find" for example, which lists directories recursively, or any other list. What if my list was: ( /tmp/foo.jpg | /home/alice/bar.jpg | /home/bob/my holiday/baz.jpg | /tmp/new\nline/grault.jpg ), or any other totally arbitrary list of files (of course, potentially with spaces and newlines in them)? – Idris Jul 13 '09 at 12:40
1

You can safely do the count with this:

find . -exec echo ';' | wc -l

(It prints a newline for every file/dir found, and then counts the newlines printed out...)

Balázs Pozsár
  • 2
    It is much faster to use the `-printf` option instead of `-exec` for every file: `find . -printf "\n" | wc -l` – Oliver I Oct 11 '19 at 15:42
1

Avoid xargs if you can:

man ruby | less -p 777 
IFS=$'\777' 
#array=( $(find ~ -maxdepth 1 -type f -exec printf "%s\777" '{}' \; 2>/dev/null) ) 
array=( $(find ~ -maxdepth 1 -type f -exec printf "%s\777" '{}' + 2>/dev/null) ) 
echo ${#array[@]} 
printf "%s\n" "${array[@]}" | nl 
echo "${array[0]}" 
IFS=$' \t\n' 
1

I am new but I believe that this an answer; hope it helps someone:

STYLE="$HOME/.fluxbox/styles/"

declare -a array1

LISTING=`find $HOME/.fluxbox/styles/ -maxdepth 1 -type f -print0`


echo $LISTING
array1=( `echo $LISTING`)
TAR_SOURCE=`echo ${array1[@]}`

#tar czvf ~/FluxieStyles.tgz $TAR_SOURCE
1

Old question, but no-one suggested this simple method, so I thought I would. Granted, if your filenames contain an ETX character, this doesn't solve your problem, but I suspect it serves for any real-world scenario. Trying to use NUL seems to run afoul of the default IFS handling rules. Season to your tastes with find options and error handling.

savedFS="$IFS"
IFS=$'\x3'
filenames=(`find wherever -printf %p$'\x3'`)
IFS="$savedFS"
oHo
1

Gordon Davisson's answer is great for bash. However, a useful shortcut exists for zsh users:

First, place your string in a variable:

A="$(find /tmp -type f -print0)"

Next, split this variable and store it in an array:

B=( ${(s/^@/)A} )

There is a trick: ^@ is the NUL character. To do it, you have to type Ctrl+V followed by Ctrl+@.

You can check that each entry of $B contains the right value:

for i in "$B[@]"; echo \"$i\"

Careful readers may notice that the call to find can be avoided in most cases by using the ** glob syntax. For example:

B=( /tmp/** )
Jérôme Pouiller
0

This is similar to Stephan202's version, but the files (and directories) are put into an array all at once. The for loop here is just to "do useful things":

files=(*)                        # put files in current directory into an array
i=0
for file in "${files[@]}"
do
    echo "File ${i}: ${file}"    # do something useful 
    let i++
done

To get a count:

echo ${#files[@]}
Dennis Williamson
-3

Bash has never been good at handling filenames (or any text, really), because by default it splits lists on whitespace.

I'd recommend using Python with the sh library instead.

Timmmm