
I have a pipeline, say `a|b`, where if `a` runs into a problem, I want to stop the whole pipeline.

`a` exiting with exit=1 doesn't do this, as `b` often doesn't care about return codes.

e.g.

echo 1 | grep 0 | echo $?   <-- this shows that grep did exit=1
echo 1 | grep 0 | wc        <-- wc is unfazed by grep's exit here

If I ran the pipeline as a subprocess of an owning process, any of the pipeline processes could kill the owning process. That seems a bit clumsy, but it would zap the whole pipeline.
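
For example, something along these lines (with hypothetical commands `a` and `b`), where each stage signals its whole process group on failure:

# each stage zaps the whole process group if it fails
( a || kill 0 ) | ( b || kill 0 )

`kill 0` sends a signal to every process in the current process group, which in a non-interactive script includes all the pipeline stages and the script itself.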

user1202733
  • It's hard to understand exactly what you're asking, but perhaps this helps? http://stackoverflow.com/a/19804002/2096752 – shx2 Feb 04 '16 at 10:56
  • Maybe like that? `status_a=$(a); if [[ $? -eq 0 ]]; then b $status_a; fi` – Daniel Feb 04 '16 at 12:15
  • But that runs `a` to completion. I want `b` (and anything subsequent) to be underway. Further, if my pipeline is longer, say `a|b|c|d|e|f`, I would have to repeat that logic at each stage. I want any stage in the pipeline to be able to stop every other stage. – user1202733 Feb 04 '16 at 13:58
  • Write `set -e` and `set -o pipefail` at the beginning of your file. – Daniel Feb 04 '16 at 14:21
  • `-e` will exit on an error and `-o pipefail` will produce an error code on each stage of your "pipeline" – Daniel Feb 04 '16 at 14:23
  • The duplicate seems to have a different interpretation than the way I read this one. To stop (i.e. kill) the pipeline as soon as one of its commands fails is different from converting it to sequential execution with temporary files. –  Feb 04 '16 at 15:25
  • If you run `a | b`, it is entirely possible that `b` will run to completion before `a` even starts. What behavior do you want when you say "stop the whole pipeline?" Typically, if `b` is actually reading its input, it won't do anything if `a` fails and does not generate any output, so the default behavior gives you what you seem to want. – William Pursell Feb 04 '16 at 16:52
  • Be careful with `echo 1|grep 0|echo $?`. The value in `$?` there has nothing to do with either of the processes in the pipeline. Consider `sh -c 'exit 35' | echo $?` – William Pursell Feb 04 '16 at 16:56

2 Answers


Not possible with basic shell constructs, probably not possible in shell at all.

Your first example doesn't do what you think. echo doesn't use standard input, so putting it on the right side of a pipe is never a good idea. The $? that you're echoing is not the exit value of the grep 0. All commands in a pipeline run simultaneously. echo has already been started, with the existing value of $?, before the other commands in the pipeline have finished. It echoes the exit value of whatever you did before the pipeline.

# The first command is to set things up so that $? is 2 when the
# second command is parsed.
$ sh -c 'exit 2'
$ echo 1|grep 0|echo $?
2

Your second example is a little more interesting. It's correct to say that wc is unfazed by grep's exit status. All commands in the pipeline are children of the shell, so their exit statuses are reported to the shell. The wc process doesn't know anything about the grep process. The only communication between them is the data stream written to the pipe by grep and read from the pipe by wc.

There are ways to find all the exit statuses after the fact (the linked question in the comment by shx2 has examples) but a basic rule that you can't avoid is that the shell will always wait for all the commands to finish.
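
In bash, for example, the `PIPESTATUS` array records each stage's exit status, though only after the whole pipeline has finished:

$ echo 1 | grep 0 | wc -l
0
$ echo "${PIPESTATUS[@]}"
0 1 0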

Early exits in a pipeline sometimes do have a cascade effect. If a command on the right side of a pipe exits without reading all the data from the pipe, the command on the left of that pipe will get a SIGPIPE signal the next time it tries to write, which by default terminates the process. (The two phrases to pay close attention to there are "the next time it tries to write" and "by default". If the writing process spends a long time doing other things between writes to the pipe, it won't die immediately. If it handles the SIGPIPE, it won't die at all.)
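
A quick bash demonstration of that cascade: `head` exits after one line, and `yes` dies of SIGPIPE on a later write, which shows up as exit status 141 (128 plus signal number 13):

$ yes | head -n 1
y
$ echo "${PIPESTATUS[@]}"
141 0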

In the other direction, when a command on the left side of a pipe exits, the command on the right side of that pipe gets EOF, which does cause the exit to happen fairly soon when it's a simple command like wc that doesn't do much processing after reading its input.
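
For example, `wc` finishes almost immediately after the left side exits and closes the pipe, and (without `pipefail`) the pipeline's status is `wc`'s, not the writer's:

$ sh -c 'echo one; exit 7' | wc -l
1
$ echo $?
0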

With direct use of pipe(), fork(), and wait3(), it would be possible to construct a pipeline, notice when one child exits badly, and kill the rest of them immediately. This requires a language more sophisticated than the shell.

I tried to come up with a way to do it in shell with a series of named pipes, but I don't see it. You can run all the processes as separate jobs and get their PIDs with $!, but the wait builtin isn't flexible enough to say "wait for any child in this set to exit, and tell me which one it was and what the exit status was".
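
If your bash is new enough to have `wait -n` (added in 4.3), which waits for the next job to finish and returns its exit status (though it still doesn't report which job it was), you can get close. A rough sketch with hypothetical stage commands `a`, `b`, and `c` joined by named pipes:

# run each stage as a separate background job, joined by named pipes
mkfifo p1 p2
a > p1 &        pids=($!)
b < p1 > p2 &   pids+=($!)
c < p2 &        pids+=($!)
# reap the jobs one at a time; on the first failure, kill the rest
for _ in "${pids[@]}"; do
    wait -n || { kill "${pids[@]}" 2>/dev/null; break; }
done
rm -f p1 p2

This still can't name the failing stage, but it does let any stage's failure take down the others.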

If you're willing to mess with ps and/or /proc you can find out which processes have exited (they'll be zombies), but you can't distinguish successful exit from any other kind.


Write

set -e 
set -o pipefail 

at the beginning of your file.

`-e` will exit on an error, and `-o pipefail` will make the pipeline's exit status the error code of the last failing stage in your pipeline, rather than just the last command's status.
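
For example, in bash, `grep` exits 1 on no match, and `pipefail` surfaces that as the pipeline's own exit status:

$ set -o pipefail
$ echo 1 | grep 0 | wc -l
0
$ echo $?
1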

Daniel