
How can I search for files in directories whose names contain spaces, using find? I use this script:

#!/bin/bash
for i in `find "/tmp/1/" -iname "*.txt" | sed 's/[0-9A-Za-z]*\.txt//g'`
do
    for j in `ls "$i" | grep sh | sed 's/\.txt//g'`
    do
        find "/tmp/2/" -iname "$j.sh" -exec cp {} "$i" \;
    done
done

but files and directories with spaces in their names are not processed. Why?

Stoned_Fox
    The problem is not really with `find` but with the `for` loops since "spaces" are taken as delimiter between items. – Sylvain Leroux Aug 18 '14 at 12:20
  • 2
    What exactly are you trying to do here? – skamazin Aug 18 '14 at 12:22
  • @skamazin I have a .txt file in some directory. I want the script to search for files with the same name as that .txt file and copy them to the directory with the .txt file. And I need recursive search. – Stoned_Fox Aug 18 '14 at 12:46
  • @SerjAntiquity _"I want the script to search for files with the same name as that .txt file and copy them to the directory with the .txt file."_ If you copy a file with the _same name_ (incl. `.txt` ext.) you will overwrite it. Is that expected behavior? – Sylvain Leroux Aug 18 '14 at 12:52
  • @Sylvain Leroux, No, I copy a file with the same name, but with a different extension (.sh) – Stoned_Fox Aug 18 '14 at 12:55
  • @SerjAntiquity As of now, you have several answers trying to fix your probably sub-optimal solution. May I suggest you rephrase what you have explained in your previous comments and then edit your question (or even post another question) -- focusing on _what you are trying to do_. – Sylvain Leroux Aug 18 '14 at 12:57
  • @SylvainLeroux Thanks. I'll ask a new question. – Stoned_Fox Aug 18 '14 at 12:59
  • possible duplicate of [Bash : iterate over list of files with spaces](http://stackoverflow.com/questions/7039130/bash-iterate-over-list-of-files-with-spaces) – Reinstate Monica Please Aug 18 '14 at 18:11

6 Answers


This will grab all the files that have spaces in their names:

$ ls
more space  nospace  stillnospace  this is space
$ find -type f -name "* *"
./this is space
./more space
skamazin
  • The filter for regular file types only is missing. Directory-type files will not be filtered out in your example. – deimus Aug 18 '14 at 12:26

I don't know how to achieve your goal. But given your current solution, the problem is not really with find but with the for loops, since spaces are taken as delimiters between items.

find has a useful option for those cases:

from man find:

-print0

True; print the full file name on the standard output, followed by a null character (instead of the newline character that -print uses). This allows file names that contain newlines or other types of white space to be correctly interpreted by programs that process the find output. This option corresponds to the -0 option of xargs.

As the man page says, this will match the -0 option of xargs. Several other standard tools have an equivalent option. You probably have to rewrite your complex pipeline around those tools in order to cleanly process file names containing spaces.

In addition, see bash "for in" looping on null delimited string variable to learn how to use for loop with 0-terminated arguments.
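To make that concrete, here is a minimal sketch of pairing `find -print0` with a null-delimited `read` loop; a throwaway temp directory stands in for the question's `/tmp/1`:

```
#!/bin/bash
# find emits NUL-terminated names; read -d '' consumes them, so a
# name like "a b.txt" arrives in "$f" as one value, spaces intact.
dir=$(mktemp -d)
touch "$dir/a b.txt" "$dir/c.txt"

count=0
while IFS= read -r -d '' f; do
    count=$((count + 1))        # "$f" holds one complete path
done < <(find "$dir" -iname '*.txt' -print0)

rm -rf "$dir"
```

Note the `< <( ... )` process substitution: piping `find` into `while` instead would run the loop in a subshell, and `count` would be lost.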

Sylvain Leroux

Do it like this:

find . -type f -name "* *"

Instead of . you can specify the path where you want to find files matching your criteria.

deimus

Your first for loop is:

for i in `find "/tmp/1" -iname "*.txt" | sed 's/[0-9A-Za-z]*\.txt//g'`

If I understand it correctly, it is looking for all text files in the /tmp/1 directory, and then attempting to strip the file name with the sed command, right? This would cause a single directory with multiple .txt files to be processed by the inner for loop more than once. Is that what you want?

Instead of using sed to get rid of the filename, you can use dirname instead. Also, later on, you use sed to get rid of the extension. You can use basename for that.

for i in `find "/tmp/1" -iname "*.txt"` ; do
  path=$(dirname "$i")
  for j in `ls "$path" | grep POD` ; do
    file=$(basename "$j" .txt)
    # Do whatever you want with the file
  done
done
This doesn't solve the problem of having a single directory processed multiple times, but if it is an issue for you, you can use the for loop above to store the file name in an array instead and then remove duplicates with sort and uniq.
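As a hedged sketch of that de-duplication idea (the directory layout below is invented for illustration), `sort -u` collapses the repeated parent directories so each one is handled once:

```
#!/bin/bash
# Each .txt file yields its parent directory via dirname; sort -u
# removes duplicates, so a directory holding several .txt files
# appears only once. (This line-based approach still assumes no
# newlines in the directory names.)
dir=$(mktemp -d)
mkdir -p "$dir/sub"
touch "$dir/sub/a.txt" "$dir/sub/b.txt"

dirs=$(find "$dir" -iname '*.txt' -exec dirname {} \; | sort -u)

rm -rf "$dir"
```

`sort -u` is equivalent to `sort | uniq` here, in one process.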

Trenin
  • I want the script to search for files with the same name as the .txt file and copy them to the directory containing that source .txt file. – Stoned_Fox Aug 18 '14 at 12:32

Use a while read loop with null-delimited pathname output from find:

#!/bin/bash
while IFS= read -rd '' i; do
    while IFS= read -rd '' j; do
        find "/tmp/2/" -iname "$j.sh" -exec echo cp '{}' "$i" \;  # drop "echo" to actually copy
    done < <(exec find "$i" -maxdepth 1 -mindepth 1 -name '*POD*' -not -name '*.txt' -printf '%f\0')
done < <(exec find /tmp/1 -iname '*.txt' -not -iname '[0-9A-Za-z]*.txt' -print0)
konsolebox

Never use for i in $(find...) or similar, as it'll fail for file names containing white space, as you saw.

Use find ... | while IFS= read -r i instead.

It's hard to say without sample input and expected output but something like this might be what you need:

find "/tmp/1/" -iname "*.txt" |
while IFS= read -r i
do
    i="${i%%[0-9A-Za-z]*\.txt}"
    for j in "$i"/*sh*
    do
        j="${j%%\.txt}"
        find "/tmp/2/" -iname "$j.sh" -exec cp {} "$i" \;
    done
done

The above will still fail for file names that contain newlines. If you have that situation and can't fix the file names, then look into the -print0 option for find, and piping it to xargs -0.
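For reference, a small sketch of that -print0 / xargs -0 pairing; the temp directory and file names are placeholders, not the question's actual layout:

```
#!/bin/bash
# find emits NUL-terminated names and xargs -0 splits on NUL, so
# "has space.sh" reaches cp as a single argument, even with spaces
# or newlines in the name.
dir=$(mktemp -d)
touch "$dir/has space.sh"
mkdir "$dir/dest"

find "$dir" -maxdepth 1 -iname '*.sh' -print0 |
    xargs -0 -I {} cp {} "$dir/dest"

copied=$(ls "$dir/dest")

rm -rf "$dir"
```

With -I {}, xargs runs one cp per pathname and substitutes the whole NUL-delimited name for {}.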

Ed Morton