
For the following bash statement:

tail -Fn0 /tmp/report | while [ 1 ]; do echo "pre"; exit; echo "past"; done

I got "pre" printed, but the command did not return to the bash prompt. Then, once I appended something to /tmp/report, the command quit and I got the prompt back.

I think that's reasonable: the 'exit' makes the 'while' statement quit, but the 'tail' is still alive. When something is appended to /tmp/report, 'tail' writes it to the pipe, detects that the pipe has been closed, and then quits.
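This SIGPIPE behaviour can be demonstrated in isolation (a sketch, not part of the original question; `yes` and `head` stand in for the writer and reader):

```shell
#!/bin/bash
# When the reader of a pipe exits, the writer is killed by SIGPIPE on its
# next write. Exit status 141 = 128 + 13 (SIGPIPE).
yes "data" | head -n 1 > /dev/null
echo "writer exit status: ${PIPESTATUS[0]}"   # 141 on Linux
```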

  1. Am I right? If not, could anyone provide the correct interpretation?
  2. Is it possible to add something to the 'while' statement so that the whole pipeline quits immediately? I know I could save the pid of tail into a temporary file, read that file in the 'while', and then kill the tail. Is there a simpler way?
  3. Let me enlarge my question: if this tail|while is used in a script file, is it possible to satisfy all of the following at once? a. If Ctrl-C is pressed, or the main shell process is signalled, the main shell and the various subshells and background processes it spawned all quit. b. I can quit from the tail|while only on a trigger condition, while other subprocesses keep running. c. Preferably without using a temporary file or named pipe.
Qiu Yangfan
  • possible duplicate of [Cannot terminate a shell with Ctrl+c](http://stackoverflow.com/questions/20533745/cannot-terminate-a-shell-with-ctrlc) – shellter Dec 13 '13 at 04:34

2 Answers


You're correct. The while loop is executing in a subshell because its input is redirected, and exit just exits from that subshell.
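A minimal demonstration of this point (a sketch, not from the original answer):

```shell
#!/bin/bash
# 'exit' inside a piped compound command only leaves the subshell running
# that side of the pipeline; the parent shell continues afterwards.
echo "one line" | { read -r line; echo "in subshell: $line"; exit 7; echo "never printed"; }
echo "parent survives; pipeline exit status: $?"
```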

If you're running bash 4.x, you may be able to achieve what you want with a coprocess.

coproc TAIL { tail -Fn0 /tmp/report.txt ;}
while [ 1 ]
do
    echo "pre"
    break
    echo "past"
done <&${TAIL[0]}
kill $TAIL_PID

http://www.gnu.org/software/bash/manual/html_node/Coprocesses.html
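To see the coprocess plumbing in isolation, here is a self-contained sketch (bash 4+; `printf` plus `sleep` stand in for `tail -F` so the example terminates on its own):

```shell
#!/bin/bash
# Coprocess demo (bash 4+). 'coproc NAME { ... }' sets NAME_PID and an
# array NAME whose first element is the read end of the coprocess output.
coproc FEED { printf 'first\nsecond\n'; sleep 60; }
read -r line <&"${FEED[0]}"
echo "got: $line"
kill "$FEED_PID"
wait "$FEED_PID" 2>/dev/null
echo "coprocess cleaned up"
```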

With older versions, you can use a background process writing to a named pipe:

pipe=/tmp/tail.$$
mkfifo $pipe
tail -Fn0 /tmp/report.txt >$pipe &
TAIL_PID=$!
while [ 1 ]
do
    echo "pre"
    break
    echo "past"
done <$pipe
kill $TAIL_PID
rm $pipe
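A variant of the same named-pipe approach with trap-based cleanup, so the FIFO and the background tail are removed even if the script exits early (a sketch; the watched file is a placeholder for /tmp/report.txt):

```shell
#!/bin/sh
# Named-pipe technique with an EXIT trap handling cleanup.
watched=/tmp/watched.$$          # placeholder for /tmp/report.txt
: > "$watched"
pipe=/tmp/tail.$$
mkfifo "$pipe"
trap 'kill "$TAIL_PID" 2>/dev/null; rm -f "$pipe" "$watched"' EXIT
tail -Fn0 "$watched" > "$pipe" &
TAIL_PID=$!
while :
do
    echo "pre"
    break
done < "$pipe"
echo "done; trap handles cleanup"
```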
Barmar
  • Thanks for your help! My knowledge is growing. Well, my bash is 3.2.5. Perhaps yours is the best answer, though I'm still hoping for some others. – Qiu Yangfan Dec 13 '13 at 03:15
  • You can emulate a coprocess using a background process writing to a named pipe. – Barmar Dec 13 '13 at 03:18
  • Added a solution using a named pipe. – Barmar Dec 13 '13 at 03:21
  • Barmar, thanks a lot for your second version. Actually, I had tried a named pipe before, but there was a problem: if too many log lines arrive in /tmp/report.txt in a short time, and the action in the 'while' body (in my real case I do some processing there) cannot keep up, then tail quits with some pipe error; maybe it's a pipe buffer issue, am I right? With the "tail | while" style this issue was avoided, as far as I tried: the while just handles /tmp/report.txt one line at a time. Please correct me if I'm mistaken. – Qiu Yangfan Dec 13 '13 at 04:42
  • if the pipe fills up, tail should just block waiting for the reader to get caught up. It shouldn't quit unless the reader process exits. – Barmar Dec 13 '13 at 04:49
  • OK, I just tried again, here's the result: In my error case: tail -Fn0 /tmp/report.txt >pipe & while read line – Qiu Yangfan Dec 13 '13 at 05:18
  • The writer of a pipe gets an error when the reader closes it. If you do the redirection inside the loop, it gets opened and closed each time through the loop, and that first close kills the writer. – Barmar Dec 13 '13 at 05:23
  • In the second version, the read end of the pipe is opened once for the entire loop, and only closed when the loop completes. – Barmar Dec 13 '13 at 05:24
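The distinction in this comment thread can be illustrated with a regular file instead of a FIFO (a sketch): redirecting inside the loop reopens the file every iteration, while redirecting the whole loop opens it once. With a FIFO, that per-iteration close is what kills the writer.

```shell
#!/bin/bash
tmp=$(mktemp)
printf 'a\nb\nc\n' > "$tmp"
# Redirecting the whole loop opens the file once: reads advance line by line.
while read -r line; do echo "whole-loop: $line"; done < "$tmp"
# Redirecting inside the loop reopens (and closes) the file each iteration,
# so 'read' sees the first line every time. With a FIFO, each close would
# send SIGPIPE to the writer.
for i in 1 2 3; do read -r line < "$tmp"; echo "per-iteration: $line"; done
rm -f "$tmp"
```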

You can (unreliably) get away with killing the process group:

tail -Fn0 /tmp/report | while :
do 
  echo "pre"
  sh -c 'PGID=$( ps -o pgid= $$ | tr -d \  ); kill -TERM -$PGID'
  echo "past"
done

This may send the signal to more processes than you want. If you run the above command in an interactive terminal you should be okay, but in a script it is entirely possible (indeed likely) that the process group will include the script running the command. To avoid signalling the script itself, it would be wise to enable job control and run the pipeline in the background, ensuring that a new process group is formed for the pipeline:

#!/bin/sh

# In POSIX shells that support the User Portability Utilities option
# (this includes bash & ksh), executing "set -m" turns on job control.
# Background processes run in a separate process group.  If the shell
# is interactive, a line containing their exit status is printed to
# stderr upon their completion.
set -m
tail -Fn0 /tmp/report | while :
do 
  echo "pre"
  sh -c 'PGID=$( ps -o pgid= $$ | tr -d \  ); kill -TERM -$PGID'
  echo "past"
done &
wait

Note that I've replaced the while [ 1 ] with while : because while [ 1 ] is poor style. ([ 1 ] merely tests that the string "1" is non-empty, so it behaves exactly the same as while [ 0 ].)
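The effect of set -m on process groups can be checked directly (a sketch; a sleep pipeline stands in for tail|while):

```shell
#!/bin/bash
# With 'set -m', a background pipeline runs in its own process group,
# so signalling that group does not touch the script itself.
set -m
sleep 30 | sleep 30 &
bg_pgid=$(ps -o pgid= -p $! | tr -d ' ')
my_pgid=$(ps -o pgid= -p $$ | tr -d ' ')
echo "script pgid: $my_pgid, background pgid: $bg_pgid"
kill -TERM -"$bg_pgid"          # kills only the background pipeline
wait 2>/dev/null
[ "$bg_pgid" != "$my_pgid" ] && echo "separate process group"
```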

William Pursell
  • As I tested, the set -m is significant, but whether the pipeline runs in the background or not seems to make no difference: in both cases the tail and the while end up in the same new process group. Would you explain more about the background part? – Qiu Yangfan Dec 24 '13 at 07:47
  • The standard requires background pipelines to run in a separate process group when job control is enabled. Although a shell may run a foreground pipeline in a separate process group, it is not required to. – William Pursell Dec 25 '13 at 04:45