x=$(find . -name "*.txt")
echo $x

if I run the above piece of code in a Bash shell, what I get is a string containing several file names separated by blanks, not a list.

Of course, I can further split it on blanks to get a list, but I'm sure there is a better way to do it.

So what is the best way to loop through the results of a find command?

Jahid
Haiyuan Zhang
  • The best way to loop over file names depends quite a bit on what you actually want to do with it, but unless you can *guarantee* no files have any whitespace in their name, this isn't a great way to do it. So what do you want to do in looping over the files? – Kevin Mar 08 '12 at 02:26
  • **Regarding the bounty**: the main idea here is to get a canonical answer that covers all the possible cases (filenames with new lines, problematic characters...). The idea is to then use these file names to do some stuff (call another command, perform some renaming...). Thanks! – fedorqui Jun 12 '15 at 08:08
  • Don't forget that a file or a folder name can contain ".txt" followed by space and another string, example "something.txt something" or "something.txt " – Yahya Yahyaoui Jun 17 '15 at 00:42
  • Use an array, not a variable: `x=( $(find . -name "*.txt") ); echo "${x[@]}"` Then you can loop through it: `for item in "${x[@]}"; { echo "$item"; }` – Ivan Jan 10 '20 at 07:43
  • @Ivan that is a very neat solution, but it does not work for me with whitespace in the filename. – Kes Sep 09 '20 at 08:43
  • @Kes add this `IFS=$'\n' x=...` – Ivan Sep 09 '20 at 09:12
  • @Ivan thanks. This works with white space `IFS=$'\n' x=( $(find . -iname '*.doc') ); #echo "${x[@]}"; for item in "${x[@]}"; { echo " "; echo "command 1 here"; echo "$item"; echo "command 2 here"; }` People on here don't seem to like use of find though, for this situation, for various given reasons, but with `IFS=$'\n' x=...` this works. – Kes Sep 09 '20 at 09:35

17 Answers


TL;DR: If you're just here for the most correct answer, you probably want my personal preference (see the bottom of this post):

# execute `process` once for each file
find . -name '*.txt' -exec process {} \;

If you have time, read through the rest to see several different ways and the problems with most of them.


The full answer:

The best way depends on what you want to do, but here are a few options. As long as no file or folder in the subtree has whitespace in its name, you can just loop over the files:

for i in $x; do # Not recommended, will break on whitespace
    process "$i"
done

Marginally better, cut out the temporary variable x:

for i in $(find -name \*.txt); do # Not recommended, will break on whitespace
    process "$i"
done

It is much better to glob when you can. White-space safe, for files in the current directory:

for i in *.txt; do # Whitespace-safe but not recursive.
    process "$i"
done

By enabling the globstar option, you can glob all matching files in this directory and all subdirectories:

# Make sure globstar is enabled
shopt -s globstar
for i in **/*.txt; do # Whitespace-safe and recursive
    process "$i"
done

In some cases, e.g. if the file names are already in a file, you may need to use read:

# IFS= makes sure it doesn't trim leading and trailing whitespace
# -r prevents interpretation of \ escapes.
while IFS= read -r line; do # Whitespace-safe EXCEPT newlines
    process "$line"
done < filename

read can be used safely in combination with find by setting the delimiter appropriately:

find . -name '*.txt' -print0 | 
    while IFS= read -r -d '' line; do 
        process "$line"
    done

For more complex searches, you will probably want to use find, either with its -exec option or with -print0 | xargs -0:

# execute `process` once for each file
find . -name \*.txt -exec process {} \;

# execute `process` once with all the files as arguments*:
find . -name \*.txt -exec process {} +

# using xargs*
find . -name \*.txt -print0 | xargs -0 process

# using xargs with arguments after each filename (implies one run per filename)
find . -name \*.txt -print0 | xargs -0 -I{} process {} argument

find can also cd into each file's directory before running a command by using -execdir instead of -exec, and can be made interactive (prompt before running the command for each file) using -ok instead of -exec (or -okdir instead of -execdir).
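As a quick illustration of -execdir (the scratch directory and file names below are invented for the demo), the command really does run with the matched file's directory as its working directory:

```shell
# Throwaway demo: -execdir runs the command from each matched file's directory.
tmp=$(mktemp -d)
mkdir -p "$tmp/sub"
touch "$tmp/sub/a.txt"
# pwd executes inside "$tmp/sub", not in the directory find was launched from
dir=$(find "$tmp" -name '*.txt' -execdir pwd \;)
echo "$dir"
rm -r "$tmp"
```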

*: Technically, both find and xargs (by default) will run the command with as many arguments as they can fit on the command line, as many times as it takes to get through all the files. In practice, unless you have a very large number of files it won't matter; if you exceed the length but need them all on the same command line, you're out of luck and will have to find a different way.

Gabriel Staples
Kevin
  • It's worth noting that in the case with `done < filename` and the following one with the pipe the stdin can't be used any more (→ no more interactive stuff inside the loop), but in cases where it's needed one can use `3<` instead of `<` and add `<&3` or `-u3` to the `read` part, basically using a separate file descriptor. Also, I believe `read -d ''` is the same as `read -d $'\0'` but I can't find any official documentation on that right now. – phk Mar 13 '16 at 01:00
  • `for i in *.txt; do` does not work if no files match. One extra test, e.g. `[[ -e $i ]]`, is needed – Michael Brux May 13 '16 at 07:20
  • I'm lost with this part: `-exec process {} \;` and my guess is that's a whole other question--what does that mean and how do I manipulate it? Where's a good Q/A or doc. on it? – Alex Hall Aug 20 '16 at 04:31
  • @AlexHall you can always look at the man pages (`man find`). In this case, `-exec` tells `find` to execute the following command, terminated by `;` (or `+`), wherein `{}` will be replaced by the name of the file it is processing (or, if `+` is used, all files that have made it to that condition). – Kevin Aug 20 '16 at 16:59
  • @Kevin In your second "marginally better" example, can you not double quote the search parameter so that it's `for i in $(find . -name "*.txt")` ? I've created a directory full of files names that contain white spaces. Everything seems to be working fine for me. How can I break it? I'm using Ubuntu 16.04 if that matters. – user658182 Jul 22 '17 at 15:48
  • @user658182 single quotes, double quotes, and the backslash will all work. Depending on your shell settings, `for i in *` should break. – Kevin Jul 22 '17 at 19:46
  • For a more correct answer, limit find to only search for files with `-type f`. It is valid to have a directory name ending in .txt, e.g. dir.txt – Uphill_ What Mar 22 '18 at 07:30
  • @phk `-d ''` is better than `-d $'\0'`. The latter is not only longer but also suggests that you could pass arguments containing null bytes, but you cannot. The first null byte marks the end of the string. In bash `$'a\0bc'` is the same as `a` and `$'\0'` is the same as `$'\0abc'` or just the empty string `''`. `help read` states that "*The first character of delim is used to terminate the input*" so using `''` as a delimiter is a bit of a hack. The first character in the empty string is the null byte that *always* marks the end of the string (even if you don't explicitly write it down). – Socowi May 09 '19 at 22:10
  • My usecase is to find and execute scripts and I need to fail if one of them fails. Unfortunately, non-zero return values are ignored for "--exec" option. :-( – Alexander K Jul 07 '20 at 22:16
  • @AlexanderK try the `xargs` solution, depending on your version it should exit with an error if any of the child processes do. Though I don't think it just stops if one exits with an error. – Kevin Jul 07 '20 at 22:21
  • A better option for `read` would be using `done < <(find ...)`, as that allows environment variable changes to be visible outside the loop. It's the current recommended way on the shellcheck wiki. – Daniel C. Sobral Jun 01 '21 at 13:55
  • The globstar option is not whitespace-safe for me, unless I first set IFS= (). – jtbr Apr 20 '23 at 09:25
  • If you only want the **filenames** in find+while trick, you can use `-printf "%f\0"` instead of `-print0`. – user2959760 Jun 06 '23 at 12:09
  • how to use it for the if?? thank you – M. Mariscal Aug 09 '23 at 07:33

Whatever you do, don't use a for loop:

# Don't do this
for file in $(find . -name "*.txt")
do
    …code using "$file"
done

Three reasons:

  • For the for loop to even start, the find must run to completion.
  • If a file name has any whitespace (including space, tab or newline) in it, it will be treated as two separate names.
  • Although now unlikely, you can overrun your command line buffer. Imagine if your command line buffer holds 32KB, and your for loop returns 40KB of text. That last 8KB will be dropped right off your for loop and you'll never know it.

Always use a while read construct:

find . -name "*.txt" -print0 | while IFS= read -r -d '' file
do
    …code using "$file"
done

The loop will execute while the find command is executing. Plus, this command will work even if a file name is returned with whitespace in it. And, you won't overflow your command line buffer.

The -print0 uses the NUL character as the file separator instead of a newline, and -d '' makes read use NUL as the separator while reading. The -r flag stops read from mangling backslashes, and IFS= preserves leading and trailing whitespace in the names.
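As a sanity check that -print0 really protects even newlines in names, here is a throwaway test; process substitution is used instead of the pipe only so that the counter survives the loop (the file names are made up for the demo):

```shell
tmp=$(mktemp -d)
touch "$tmp/plain.txt" "$tmp/with"$'\n'"newline.txt"  # second name contains a real newline
n=0
while IFS= read -r -d '' file; do
    n=$((n + 1))
done < <(find "$tmp" -name '*.txt' -print0)
echo "$n"   # the name with the embedded newline counts as one file, not two
rm -r "$tmp"
```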

Jonathan Leffler
David W.
  • It will not work with newlines in filenames. Use find's `-exec` instead. – user unknown Mar 08 '12 at 15:33
  • @userunknown - You're right about that. `-exec` is the safest since it doesn't use the shell at all. However, NL in file names is quite rare. Spaces in file names are quite common. The main point is not to use a `for` loop which many posters recommended. – David W. Mar 09 '12 at 03:53
  • A for will work in most cases, that's right, but you use 2 programs, where one would be sufficient. Find is already an iterator - so why should one output the result of find, just to collect it again? The only reason I know of (without doing an exhaustive research) is, if you like to sort the results of find, and process only the first/last result(s) or process something in order, maybe counting and enumerating something meanwhile. – user unknown Mar 09 '12 at 14:02
  • @userunknown - `find -exec` is the safest way, but doesn't work well with more than one command at a time. Plus, it can't use any environment variables or builtins since it doesn't use the shell. If I'm writing a quick shell script on the command line, using `find . -exec` is out. `for $var in $()` may work for **most** occasions, but it is much slower than `find | while read` because the `$()` command must finish before `for` can execute. – David W. Mar 13 '12 at 18:45
  • `find ... -exec ./adhoc.sh {} ";"` or `find -name "*.flv" -exec bash -c 'du {}; echo {}; md5sum {};' ";"` - both work fine for me. – user unknown Mar 13 '12 at 20:44
  • @userunknown - Here. I've fixed this, so it will now take care of files with new lines, tabs and any other white space. The whole point of the post is to tell the OP not to use the `for file $(find)` because of the problems associated with that. – David W. Jul 11 '13 at 17:09
  • I'm still not impressed. The main issue is, that find offers a convenient way to iterate over files and perform some actions. Your solutions look to me like a workaround for not knowing -exec (-execdir, -ok, -okdir, -delete). – user unknown Jul 11 '13 at 21:08
  • If you can use -exec it's better, but there are times when you really need the name given back to the shell. For instance if you want to remove file extensions. – Ben Reser Jan 03 '14 at 22:53
  • Actually, this *will* work with file names that contain new lines. That's the whole purpose of `-print0`. – David W. Jan 05 '14 at 14:57
  • You should use the `-r` option to `read`: `-r raw input - disables interpretation of backslash escapes and line-continuation in the read data` – Daira Hopwood Jan 17 '15 at 00:45
  • I touched a file named `"a\nb"` (\n being a real new line) and `find path/ -print0 | while read -r f; do echo "file: $f"; done` does not show it. I am curious about it (and also willing to award the bounty to the most generic and safer way to do this). – fedorqui Jun 18 '15 at 13:19
  • Note: This will put your scope into a subshell and you won't get all your variables. – Ryan Copley Sep 30 '16 at 17:49
  • A practical application is `find . -name "*.pdf" -or -name \*.png -print0 | while read -d $'\0' file; do du -h $file; done;`, which will give you the size of each file. – f0nzie May 25 '20 at 13:56
find . -name "*.txt"|while read fname; do
  echo "$fname"
done

Note: this method and the (second) method shown by bmargulies are safe to use with white space in the file/folder names.

In order to also have the - somewhat exotic - case of newlines in the file/folder names covered, you will have to resort to the -exec predicate of find like this:

find . -name '*.txt' -exec echo "{}" \;

The {} is the placeholder for the found item and the \; is used to terminate the -exec predicate.

And for the sake of completeness let me add another variant - you gotta love the *nix ways for their versatility:

find . -name '*.txt' -print0|xargs -0 -n 1 echo

This separates the printed items with a NUL (\0) character, which isn't allowed in file or folder names on any file system I know of, and therefore should cover all bases. xargs then picks them up one by one.

0xC0000022L
  • Fails if newline in filename. – user unknown Mar 08 '12 at 15:31
  • @user unknown: you are right, it's a case I hadn't considered at all and that, I think, is very exotic. But I adjusted my answer accordingly. – 0xC0000022L Mar 08 '12 at 15:43
  • Yes. The masking of {} is searching for shell, which would benefit from it. More precise: [I'm searching for such a shell](http://unix.stackexchange.com/q/8647/4485). Another thing which I don't understand is the work to pipe the result from find to xargs, while you can do it with find alone. – user unknown Mar 08 '12 at 16:00
  • `while read fname` is great because it doesn't force you in a sub-shell and quoting... But `while read -r fname` is even better, as it adds support for `\` in the input (input may be file contents, not file names) – Piotr Findeisen Mar 11 '14 at 23:49
  • Probably worth pointing out that `find -print0` and `xargs -0` are both GNU extensions and not portable (POSIX) arguments. Incredibly useful on those systems that have them, though! – Toby Speight Aug 04 '16 at 15:07
  • This also fails with filenames containing backslashes (which `read -r` would fix), or filenames ending in whitespace (which `IFS= read` would fix). Hence [BashFAQ #1](http://mywiki.wooledge.org/BashFAQ/001) suggesting `while IFS= read -r filename; do ...` – Charles Duffy Apr 16 '17 at 16:15
  • That said, when deciding whether to worry about filenames with literal newlines, keep in mind that it's not unheard of for an attacker to create a *deliberately* hard-to-delete file, or a name that injects unwanted arguments into a command run by a higher-privileged user. Consider that `$'/tmp/evil $\n/etc/passwd'`, for instance, would cause your code to not only skip iterating over `'/tmp/evil '`, but would also *add* `/etc/passwd` to the list of contents you iterate over. – Charles Duffy Apr 16 '17 at 16:16
  • Another problem with this is that it *looks* like the body of the loop is executing in the same shell, but it's not, so for example `exit` won't work as expected and variables set in the loop body won't be available after the loop. – EM0 Feb 08 '18 at 12:25
  • For anyone looking for the 1-liner of your main, first answer: `find . -name "*.txt"|while read fname; do echo "$fname"; done`. – Gabriel Staples Mar 06 '21 at 01:14
  • This is executing `while` loop in a subshell, for better solution look here: https://stackoverflow.com/a/74677358/14260647 – Celuk Dec 04 '22 at 14:09

Filenames can include spaces and even control characters. Spaces are (by default) delimiters for shell word splitting in bash, and as a result x=$(find . -name "*.txt") from the question is not recommended at all. If find returns a filename with spaces, e.g. "the file.txt", you will get two separate strings to process if you loop over x. You can improve this by changing the delimiter (the bash IFS variable), e.g. to \r\n, but filenames can also include control characters - so this is not a (completely) safe method.

From my point of view, there are 2 recommended (and safe) patterns for processing files:

1. Use for loop & filename expansion:

for file in ./*.txt; do
    [[ ! -e $file ]] && continue  # continue, if file does not exist
    # single filename is in $file
    echo "$file"
    # your code here
done

2. Use find-read-while & process substitution

while IFS= read -r -d '' file; do
    # single filename is in $file
    echo "$file"
    # your code here
done < <(find . -name "*.txt" -print0)

Remarks

on Pattern 1:

  1. bash returns the search pattern ("*.txt") if no matching file is found - so the extra line "continue, if file does not exist" is needed. see Bash Manual, Filename Expansion
  2. shell option nullglob can be used to avoid this extra line.
  3. "If the failglob shell option is set, and no matches are found, an error message is printed and the command is not executed." (from Bash Manual above)
  4. shell option globstar: "If set, the pattern ‘**’ used in a filename expansion context will match all files and zero or more directories and subdirectories. If the pattern is followed by a ‘/’, only directories and subdirectories match." see Bash Manual, Shopt Builtin
  5. other options for filename expansion: extglob, nocaseglob, dotglob & shell variable GLOBIGNORE
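A minimal sketch of remark 2, run in an empty scratch directory: with nullglob set, an unmatched pattern expands to nothing, so the loop body never runs and the extra existence test becomes unnecessary:

```shell
tmp=$(mktemp -d)      # empty scratch directory: no *.txt files at all
shopt -s nullglob     # unmatched globs now expand to nothing
count=0
for file in "$tmp"/*.txt; do
    count=$((count + 1))   # never reached: the pattern vanished
done
shopt -u nullglob
echo "iterations: $count"
rm -r "$tmp"
```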

on Pattern 2:

  1. filenames can contain blanks, tabs, spaces, newlines, ... to process filenames in a safe way, find with -print0 is used: filename is printed with all control characters & terminated with NUL. see also Gnu Findutils Manpage, Unsafe File Name Handling, safe File Name Handling, unusual characters in filenames. See David A. Wheeler below for detailed discussion of this topic.

  2. There are some possible patterns to process find results in a while loop. Others (Kevin, David W.) have shown how to do this using pipes:

    files_found=1
    find . -name "*.txt" -print0 | 
       while IFS= read -r -d '' file; do
           # single filename in $file
           echo "$file"
           files_found=0   # not working example
           # your code here
       done
    [[ $files_found -eq 0 ]] && echo "files found" || echo "no files found"
    

    When you try this piece of code, you will see, that it does not work: files_found is always "true" & the code will always echo "no files found". Reason is: each command of a pipeline is executed in a separate subshell, so the changed variable inside the loop (separate subshell) does not change the variable in the main shell script. This is why I recommend using process substitution as the "better", more useful, more general pattern.
    See I set variables in a loop that's in a pipeline. Why do they disappear... (from Greg's Bash FAQ) for a detailed discussion on this topic.
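To make the contrast concrete, here is the same files_found check rewritten with process substitution (the scratch directory and file name are only for the demo); because the loop now runs in the main shell, the assignment survives:

```shell
tmp=$(mktemp -d)
touch "$tmp/a b.txt"      # a name with a space, to show it stays intact
files_found=1
while IFS= read -r -d '' file; do
    echo "found: $file"
    files_found=0         # set in the main shell, so it persists after the loop
done < <(find "$tmp" -name '*.txt' -print0)
[[ $files_found -eq 0 ]] && echo "files found" || echo "no files found"
rm -r "$tmp"
```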

Benjamin W.
Michael Brux
  • Excellent info. Finally I found someone including an explanation and references when using process substitution in an answer. – rooby Aug 28 '21 at 07:20

(Updated to include @Socowi's excellent speed improvement)

With any $SHELL that supports it (dash/zsh/bash...):

find . -name "*.txt" -exec $SHELL -c '
    for i in "$@" ; do
        echo "$i"
    done
' _ {} +

Done. (The _ placeholder takes the place of $0 inside the spawned shell, so the first found file is not silently consumed by the loop.)


Original answer (shorter, but slower):

find . -name "*.txt" -exec $SHELL -c '
    echo "$0"
' {} \;
user569825
  • Slow as molasses (since it launches a shell for each file) but this does work. +1 – dawg Sep 17 '17 at 00:09
  • Instead of `\;` you can use `+` to pass as many files as possible to a single `exec`. Then use `"$@"` inside the shell script to process all these parameters. – Socowi May 09 '19 at 22:36
  • There is a bug in this code. The loop is missing the first result. That's because `$@` omits it since it is typically the name of the script. We just need to add `dummy` in between `'` and `{}` so it can take the place of the script name, ensuring all the matches are processed by the loop. – BCartolo Aug 05 '19 at 20:53
  • What if I need other variables from outside the newly created shell? – Jodo Nov 17 '19 at 21:47
  • `OTHERVAR=foo find . -na.....` should allow you to access `$OTHERVAR` from within that newly created shell. – user569825 Dec 31 '19 at 15:41
  • `for i in "$@"; do` is conventionally shortened to `for i; do`. – Konrad Rudolph Nov 24 '20 at 14:36

If you can assume the file names don't contain newlines, you can read the output of find into a Bash array using the following command:

readarray -t x < <(find . -name '*.txt')

Note:

  • -t causes readarray to strip the trailing newline from each line.
  • It won't work if readarray is in a pipe, hence the process substitution.
  • readarray is available since Bash 4.

Bash 4.4 and up also supports the -d parameter for specifying the delimiter. Using the null character, instead of newline, to delimit the file names works also in the rare case that the file names contain newlines:

readarray -d '' x < <(find . -name '*.txt' -print0)

readarray can also be invoked as mapfile with the same options.

Reference: https://mywiki.wooledge.org/BashFAQ/005#Loading_lines_from_a_file_or_stream
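A small self-contained check of the -d '' form (scratch directory and file names invented for the demo; requires Bash 4.4+):

```shell
tmp=$(mktemp -d)
touch "$tmp/a.txt" "$tmp/b c.txt"   # second name contains a space
readarray -d '' x < <(find "$tmp" -name '*.txt' -print0)
echo "${#x[@]}"                     # number of array entries: one per file
for f in "${x[@]}"; do printf '<%s>\n' "$f"; done
rm -r "$tmp"
```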

Socowi
Seppo Enarvi
  • This is the best answer! Works with: * Spaces in filenames * No matching files * `exit` when looping over the results – EM0 Feb 08 '18 at 12:37
  • Doesn't work with *all* possible filenames, though -- for that, you should use `readarray -d '' x < <(find . -name '*.txt' -print0)` – Charles Duffy Jan 31 '19 at 18:17
  • This solution worked also for me in the special case when directory didn't find any files. In that case you want an empty array instead of an array with one element containing an empty string. Thanks! – Jan Sep 12 '21 at 08:55
# Doesn't handle whitespace
for x in `find . -name "*.txt" -print`; do
  process_one $x
done

or

# Handles whitespace and newlines
find . -name "*.txt" -print0 | xargs -0 -n 1 process_one
0xC0000022L
bmargulies
  • `for x in $(find ...)` will break for any filename with whitespace in it. Same with `find ... | xargs` unless you use `-print0` and `-0` – glenn jackman Mar 08 '12 at 03:36
  • Use `find . -name "*.txt -exec process_one {} ";"` instead. Why should we use xargs to collect results, we already have? – user unknown Jul 11 '13 at 21:11
  • @userunknown Well that all depends on what `process_one` is. If it's a placeholder for an actual *command*, sure that would work (if you fix typo and add closing quotes after `"*.txt`). But if `process_one` is a user-defined function, your code won't work. – toxalot Mar 10 '14 at 01:18
  • @toxalot: Yes, but it wouldn*t be a problem to write the function in a script to call. – user unknown Mar 11 '14 at 05:13

I think using this piece of code (feeding the command's output into the loop with a here-string after done):

while read fname; do
  echo "$fname"
done <<< "$(find . -name "*.txt")"

is better than the piped answer, because there the while loop is executed in a subshell: if you use that answer, variable changes made inside the loop cannot be seen once the loop ends.

Celuk

I like to assign the output of find to a variable first, with IFS switched to newline, as follows:

FilesFound=$(find . -name "*.txt")

IFSbkp="$IFS"
IFS=$'\n'
counter=1;
for file in $FilesFound; do
    echo "${counter}: ${file}"
    let counter++;
done
IFS="$IFSbkp"

As commented by @Konrad Rudolph, this will not work with newlines in file names. I still think it is handy, as it covers most of the cases where you need to loop over command output.

Paco
  • This solution doesn’t always work (newline in filenames), and is no easier than correct solutions that work in all cases. – Konrad Rudolph Nov 24 '20 at 14:36

As already posted in the top answer by Kevin, the best solution is to use a for loop with a bash glob, but since bash globbing is not recursive by default, this can be fixed with a recursive bash function:

#!/bin/bash
set -x
set -eu -o pipefail

all_files=();

function get_all_the_files()
{
    directory="$1";
    for item in "$directory"/* "$directory"/.[^.]*;
    do
        [[ -e "$item" || -L "$item" ]] || continue;  # skip literal patterns left by unmatched globs
        if [[ -d "$item" ]];
        then
            get_all_the_files "$item";
        else
            all_files+=("$item");
        fi;
    done;
}

get_all_the_files "/tmp";

for file_path in "${all_files[@]}"
do
    printf 'My file is "%s"\n' "$file_path";
done;

Related questions:

  1. Bash loop through directory including hidden file
  2. Recursively list files from a given directory in Bash
  3. ls command: how can I get a recursive full-path listing, one line per file?
  4. List files recursively in Linux CLI with path relative to the current directory
  5. Recursively List all directories and files
  6. bash script, create array of all files in a directory
  7. How can I creates array that contains the names of all the files in a folder?
  8. How to get the list of files in a directory in a shell script?
Evandro Coan

You can put the filenames returned by find into an array like this:

array=()
while IFS=  read -r -d ''; do
    array+=("$REPLY")
done < <(find . -name '*.txt' -print0)

Now you can just loop through the array to access individual items and do whatever you want with them.

Note: It's white space safe.

Socowi
Jahid
  • With bash 4.4 or higher you could use a single command instead of a loop: `mapfile -t -d '' array < <(find ...)`. Setting `IFS` is not necessary for `mapfile`. – Socowi May 09 '19 at 22:20

Based on other answers and the comment of @phk, using fd #3
(which still allows using stdin inside the loop):

while IFS= read -r f <&3; do
    echo "$f"

done 3< <(find . -iname "*filename*")
Florian

You can store your find output in an array if you wish to use the output later, as:

array=($(find . -name "*.txt"))  # note: word-splits on whitespace in file names

Now to print each element on a new line, you can either use a for loop iterating over all the elements of the array, or you can use a printf statement.

for i in "${array[@]}"; do echo "$i"; done

or

printf '%s\n' "${array[@]}"

You can also use:

for file in "`find . -name "*.txt"`"; do echo "$file"; done

This will print each filename on a new line.

To only print the find output in list form, you can use either of the following:

find . -name "*.txt" -print 2>/dev/null

or

find . -name "*.txt" -print 2>&1 | grep -v 'Permission denied'

This will remove error messages and only give the filename as output in new line.

If you wish to do something with the filenames, storing them in an array is good; otherwise there is no need to consume that space and you can print the output from find directly.

Rakholiya Jenish
function loop_through(){
        length_="$(find . -name '*.txt' | wc -l)"
        length_="${length_#"${length_%%[![:space:]]*}"}"
        length_="${length_%"${length_##*[![:space:]]}"}"
        for i in $(seq 1 "$length_")   # {1..$length_} would not expand a variable
        do
            x=$(find . -name '*.txt' | sort | head -$i | tail -1)
            echo "$x"
        done
}

To grab the length of the list of files for the loop, I used the command "wc -l".
That count is stored in a variable.
Then I remove the surrounding whitespace from the variable so the for loop can use it.

S.Doe_Dude

find <path> -xdev -type f -name '*.txt' -exec ls -l {} \;

This will list the files and give details about attributes.

chetangb

Another alternative is to not use bash, but call Python to do the heavy lifting. I resorted to this because bash solutions like my other answer were too slow.

With this solution, we build a bash array of files from inline Python script:

#!/bin/bash
set -eu -o pipefail

dsep=":"  # directory_separator
base_directory=/tmp

all_files=()
all_files_string="$(python3 -c '#!/usr/bin/env python3
import os
import sys

dsep="'"$dsep"'"
base_directory="'"$base_directory"'"

def log(*args, **kwargs):
    print(*args, file=sys.stderr, **kwargs)

def check_invalid_character(file_path):
    for thing in ("\\", "\n"):
        if thing in file_path:
            raise RuntimeError(f"{thing!r} is not allowed in \"{file_path}\"!")

def absolute_path_to_relative(base_directory, file_path):
    relative_path = os.path.commonprefix( [ base_directory, file_path ] )
    relative_path = os.path.normpath( file_path.replace( relative_path, "" ) )

    # if you use Windows Python, it accepts / instead of \\
    # if you have \ in your file names, rename them or comment this
    relative_path = relative_path.replace("\\", "/")
    if relative_path.startswith( "/" ):
        relative_path = relative_path[1:]
    return relative_path

for directory, directories, files in os.walk(base_directory):
    for file in files:
        local_file_path = os.path.join(directory, file)
        local_file_name = absolute_path_to_relative(base_directory, local_file_path)

        log(f"local_file_name {local_file_name}.")
        check_invalid_character(local_file_name)
        print(f"{base_directory}{dsep}{local_file_name}")
' | dos2unix)";
if [[ -n "$all_files_string" ]];
then
    readarray -t temp <<< "$all_files_string";
    all_files+=("${temp[@]}");
fi;

for item in "${all_files[@]}";
do
    OLD_IFS="$IFS"; IFS="$dsep";
    read -r base_directory local_file_name <<< "$item"; IFS="$OLD_IFS";

    printf 'item "%s", base_directory "%s", local_file_name "%s".\n' \
            "$item" \
            "$base_directory" \
            "$local_file_name";
done;

Related:

  1. os.walk without hidden folders
  2. How to do a recursive sub-folder search and return files in a list?
  3. How to split a string into an array in Bash?
Evandro Coan

How about if you use grep instead of find?

ls | grep '\.txt$' > out.txt

Now you can read this file and the filenames are in the form of a list.

Nathan Arthur
  • No, don't do this. [Why you shouldn't parse the output of ls](http://mywiki.wooledge.org/ParsingLs). This is fragile, very fragile. – fedorqui Jun 18 '15 at 09:47