42

Given a text file with multiple lines, I would like to iterate over each line in a Bash script. I had attempted to use cut, but cut does not accept \n (newline) as a delimiter.

This is an example of the file I am working with:

one
two 
three 
four

Does anyone know how I can loop through each line of this text file in Bash?

Andria
pedr0
  • This: https://stackoverflow.com/questions/13939038/how-do-you-run-a-command-eg-chmod-for-each-line-of-a-file/ also has relevant answers – user8395964 Feb 26 '23 at 04:08

10 Answers

88

I found myself with the same problem; this works for me:

cat file.cut | cut -d$'\n' -f1

Or:

cut -d$'\n' -f1 file.cut
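As a quick illustration of what `$'\n'` expands to (a minimal sketch, not part of the original answer):

```shell
# ANSI-C quoting: $'...' expands backslash escapes, so $'\n' is a real newline
s=$'one\ntwo'
echo "${#s}"            # 7 characters: three + the newline + three
printf '%s\n' "$s"      # prints "one" and "two" on separate lines
```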
Ivan
    why does this work? what does the dollar sign do in this case – Jules G.M. Nov 07 '18 at 20:51
    @JulesG.M. it works because Bash has a feature called ANSI-C Quoting. The documentation mentions this: "Words of the form $'string' are treated specially. The word expands to string, with backslash-escaped characters replaced as specified by the ANSI C standard.". You can find the docs here: https://www.gnu.org/software/bash/manual/html_node/ANSI_002dC-Quoting.html – metator Mar 14 '19 at 20:47
33

cat is for concatenating or displaying files; there's no need for it here.

file="/path/to/file"
while read line; do
  echo "${line}"
done < "${file}"
Andria
Benoit
    I'd agree with this, but this is noticably slower than the `cut` approach. – Stan Strum Jan 11 '18 at 06:15
  • @StanStrum This solution should be much quicker (`O(n)` time complexity) and more readable than the `cut` approach. The OP was asking how to iterate over every single line in a file. Using `cut`, one would have to either know the number of lines in the file and write that many statements using `cut` or write a while loop anyway giving you an `O(n + n²)` time complexity. – Andria Aug 16 '21 at 00:30
  • This solution is, however, not complete as there are some pitfalls and edge cases that would come up. – Andria Aug 16 '21 at 04:46
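As the last comment notes, the plain `read line` form has pitfalls: it mangles backslashes, trims leading and trailing whitespace, and skips a final line that lacks a trailing newline. A more defensive sketch (the `IFS=`, `-r`, and `|| [ -n "$line" ]` pieces are additions, not part of the original answer):

```shell
file="/path/to/file"
# IFS= preserves leading/trailing whitespace; -r keeps backslashes literal;
# the || [ -n "$line" ] guard also processes a last line without a newline
while IFS= read -r line || [ -n "$line" ]; do
  printf '%s\n' "$line"
done < "${file}"
```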
3

Simply use:

echo -n `cut ...`

This suppresses the trailing \n.

Philipp Murry
3
cat FILE|while read line; do # 'line' is the variable name
   echo "$line" # do something here
done

or (see comment):

while read line; do # 'line' is the variable name
   echo "$line" # do something here
done < FILE
0xC0000022L
    [UUOC](http://en.wikipedia.org/wiki/Cat_%28Unix%29#Useless_use_of_cat) - use `while ... done < file` – Kevin Mar 06 '12 at 16:28
    It's not useless, it's perhaps wasteful (after reading this paragraph), but it's also more readable IMO. – 0xC0000022L Mar 06 '12 at 16:34
2

So, some really good (possibly better) answers have been provided already. But given that the original question asks for a Bash for loop, I'm surprised nobody mentioned a solution that changes the field separator, IFS. It's a pure Bash solution, just like the accepted `read line` answer:

old_IFS=$IFS
IFS=$'\n'        # a real newline; '\n' would be the two characters \ and n
for field in $(<filename)
do
  your_thing
done
IFS=$old_IFS
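A runnable demonstration of the idea (the temp file is a stand-in for your own; the `set -f` is an addition, since the unquoted `$(<file)` would otherwise still expand glob characters like `*`):

```shell
tmp=$(mktemp)                     # hypothetical demo file
printf 'one\ntwo with spaces\n' > "$tmp"
old_IFS=$IFS
IFS=$'\n'                         # split only on newlines
set -f                            # disable globbing of the unquoted expansion
for field in $(<"$tmp")
do
  printf '[%s]\n' "$field"       # prints [one] then [two with spaces]
done
set +f
IFS=$old_IFS
rm -f "$tmp"
```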
DOK
2

If you are sure that the output will always be newline-delimited, use head -n 1 in lieu of cut -f1 (note that you mentioned a loop in a script, but your question was ultimately not script-specific).

Many of the other answers, including the accepted one, use multiple lines unnecessarily. There's no need to spread this over several lines or to change the default delimiter on the system.

Also, the solution provided by Ivan with -d$'\n' did not work for me on either Mac OS X or CentOS 7. Since his answer is four years old, I assume something must have changed in the handling of the $ character for this situation.
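For instance, grabbing just the first line of newline-delimited output (a minimal sketch):

```shell
# head -n 1 prints only the first line of its input
printf 'one\ntwo\nthree\n' | head -n 1   # prints: one
```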

MisterStrickland
2

While loop with input redirection and read command.

You should not be using cut to perform a sequential iteration of each line in a file as cut was not designed to do this.

Print selected parts of lines from each FILE to standard output. — man cut

TL;DR

You should use a while loop with the read -r command and redirect standard input to your file, inside a function scope where IFS is set to a newline ($'\n'), and use -E when using echo.

processFile() {          # Function scope to prevent overwriting IFS globally
  file="$1"              # Any file that exists
  local IFS=$'\n'        # A real newline, so leading spaces and tabs survive
  while read -r line; do # read exits non-zero at EOF; -r keeps \ literal
    echo -E "$line"      # -E disables interpretation of backslash escapes
  done < "$file"         # Input redirection lets read consume the file via stdin
}
processFile /path/to/file
processFile /path/to/file

Iteration

In order to iterate over each line of a file, we can use a while loop. This will let us iterate as many times as we need to.

while <condition>; do
  <body>
done

Getting our file ready to read

We can use the read command to store a single line from standard input in a variable. Before we can use that to read a line from our file, we need to redirect standard input to point to our file. We can do this with input redirection. According to the man pages for bash, the syntax for redirection is [fd]<file, where fd defaults to standard input (a.k.a. file descriptor 0). For a compound command such as a while loop, the redirection goes after the closing done; a redirection may only precede a simple command, so placing it before the loop is a syntax error in bash.

while <condition>; do
  <body>
done < /path/to/file

Reading the file and ending the loop

Now that our file can be read from standard input, we can use read. The syntax for read in our context is read [-r] var..., where -r preserves the \ (backslash) character instead of treating it as an escape character, and var is the name of the variable to store the input in. You can use multiple variables to store pieces of the input, but we only need one to read an entire line. Along with this, to preserve any backslashes in output from echo, you will likely need the -E flag, which disables the interpretation of backslash escapes. If your lines have indentation (spaces or tabs) that you want to keep, temporarily set the IFS (Internal Field Separator) variable to just a newline, $'\n'; normally it is set to space, tab, newline (" \t\n").

main() {
  local IFS=$'\n'
  read -r line
  echo -E "$line"
}

main

How do we use read to end our while loop?

There is really only one reliable way, that I know of, to determine when you've finished reading a file with read: check the exit value of read. If the exit value of read is 0, we successfully read a line; if it is non-zero, we reached EOF (end of file). With that in mind, we can place the call to read in our while loop's condition section.

processFile() {
  # Could be any file you want, hardcoded or dynamic
  file="$1"

  local IFS=$'\n'
  while read -r line; do
    # Process line here
    echo -E "$line"
  done < "$file"
}

processFile /path/to/file1
processFile /path/to/file2

A visual breakdown of the above code via Explain Shell.

Andria
0

If I am executing a command and want to cut its output, but the output has multiple lines, I find it helpful to do

echo $([command]) | cut [....]

This puts all the output of [command] on a single line that can be easier to process.
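A concrete sketch of that pattern (the printf stands in for your command):

```shell
# The unquoted $(...) collapses all whitespace, including newlines, to single
# spaces, so cut can then split the result on spaces
echo $(printf 'one\ntwo\nthree\n') | cut -d' ' -f2   # prints: two
```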

joanis
0

The easiest way to iterate over the records in a file is with a for loop in Bash.

#!/bin/bash
echo 'Showing the records in the file';
for i in `cat myfile.txt`; do echo $i; done;  # this will show records line by line
echo 'End of the script';
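Be aware that this form splits on all whitespace, not just newlines, so a line containing spaces yields several iterations (a quick demonstration with a hypothetical temp file):

```shell
tmp=$(mktemp)
printf 'one two\nthree\n' > "$tmp"
# prints THREE lines -- one, two, three -- because the space split line 1
for i in `cat "$tmp"`; do echo "$i"; done
rm -f "$tmp"
```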
Du-Lacoste
-2

My opinion is that "cut" uses '\n' as its default delimiter. If you want to use cut, here are two ways:

    cut -d^M -f1 file_cut

I type ^M by pressing Ctrl+V and then Enter. Another way is

    cut -c 1- file_cut

Does that help?

Umae