
Here's an example of what I'm trying to achieve:

#!/bin/bash
set -e    # abort if error
...
command1 2>&1 | command2
...

I notice that sometimes command1 fails but command2 succeeds, and the shell script happily continues. If I did not have to use the pipe here, set -e would have been sufficient, but it does not catch the failure when the pipe is there.
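
For instance, here is a minimal reproduction of the behaviour (false and true are just stand-ins for a failing and a succeeding command, not part of my actual script):

#!/bin/bash
set -e
# The pipeline's exit status is that of the last command (true),
# so set -e does not abort even though the first command failed.
false | true
echo "still running"   # this line is reached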

Any thoughts? Thanks

codeforester
revit09
    That solution delays the processing of the second command until the first one has finished, and the temp files add their own overhead; not ideal in my situation. I want the piped commands to do their work in parallel and only fail when one of them returns an error. Temp files would be even more of an issue when the output of the previous commands can grow very large in size. – revit09 Feb 17 '13 at 16:52

1 Answer


Since you are using bash, in addition to set -e you can also add set -o pipefail to get the behaviour you want.
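
A minimal sketch of the combined settings, reusing the placeholder commands from the question:

#!/bin/bash
set -e            # exit as soon as any command fails
set -o pipefail   # make a pipeline fail if ANY command in it fails, not just the last one

command1 2>&1 | command2

With pipefail enabled, the pipeline's exit status is that of the rightmost command that exited non-zero (or zero if all succeed), so set -e will now abort the script when command1 fails even though command2 succeeds.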

Reza Toghraee