
Is there something similar to pipefail for multiple commands, like a 'try' statement but within bash. I would like to do something like this:

echo "trying stuff"
try {
    command1
    command2
    command3
}

And at any point, if any command fails, drop out and echo out the error of that command. I don't want to have to do something like:

command1
if [ $? -ne 0 ]; then
    echo "command1 borked it"
fi

command2
if [ $? -ne 0 ]; then
    echo "command2 borked it"
fi

And so on... or anything like:

set -o pipefail
command1 "arg1" "arg2" | command2 "arg1" "arg2" | command3

Because the arguments of each command I believe (correct me if I'm wrong) will interfere with each other. These two methods seem horribly long-winded and nasty to me so I'm here appealing for a more efficient method.

Andy Shulman
jwbensley
  • Take a look at [the unofficial _bash strict mode_](http://redsymbol.net/articles/unofficial-bash-strict-mode/): `set -euo pipefail`. – Pablo Bianchi Mar 25 '17 at 18:14
  • @PabloBianchi, `set -e` is a *horrid* idea. See [the exercises in BashFAQ #105](http://mywiki.wooledge.org/BashFAQ/105#Exercises) discussing just a few of the unexpected edge cases it introduces, and/or the comparison showing incompatibilities between different shells' (and shell versions') implementations at https://www.in-ulm.de/~mascheck/various/set-e/. – Charles Duffy Jun 18 '19 at 15:16

16 Answers


You can write a function that runs and checks the command for you. Assume command1 and command2 are shell variables that have been set to a command.

function mytest {
    "$@"
    local status=$?
    if (( status != 0 )); then
        echo "error with $1" >&2
    fi
    return $status
}

mytest "$command1"
mytest "$command2"
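The wrapper also works when you pass the command and its arguments directly, since `"$@"` preserves them. A self-contained sketch (using `true` and `false` as stand-in commands):

```shell
#!/usr/bin/env bash

# Same wrapper as above: run the command, report a failure to stderr,
# and pass the command's exit status through.
mytest() {
    "$@"
    local status=$?
    if (( status != 0 )); then
        echo "error with $1" >&2
    fi
    return $status
}

mytest true  && echo "true succeeded"
mytest false || echo "false returned $?"   # "error with false" goes to stderr first
```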
dimo414
krtek
    Don't use `$*`, it'll fail if any arguments have spaces in them; use `"$@"` instead. Similarly, put `$1` inside the quotes in the `echo` command. – Gordon Davisson Mar 04 '11 at 16:01
    Also I'd avoid the name `test` as that is a built-in command. – John Kugelman Mar 04 '11 at 16:11
    This is the method I went with. To be honest, I don't think I was clear enough in my original post but this method allows me to write my own 'test' function so I can then perform an error actions in there I like that are relevant to the actions performed in the script. Thanks :) – jwbensley Mar 26 '11 at 23:11
    Wouldn't the exit code returned by test() always return 0 in case of an error since the last command executed was 'echo'. You might need to save the value of $? first. – magiconair Nov 02 '11 at 22:52
  • @john-kugelman Why did you edit the answer in 2014, but not change the name of the function (as you pointed out in 2011)? – tudor -Reinstate Monica- Sep 18 '15 at 02:32
  • @tudor I figure krtek saw my comment and perhaps chose not to change the name. – John Kugelman Sep 18 '15 at 04:16
  • @tudor if you can come up with a better name I'll gladly change it, but since the advise is in the comment I deemed that enough for now. – krtek Sep 18 '15 at 08:56
  • For zsh $status is reserved as well ($? and $status are equivalent.): http://zsh.sourceforge.net/Doc/Release/Parameters.html#index-status http://zsh.sourceforge.net/Intro/intro_13.html – dza Jan 02 '16 at 02:08
  • This doesn't meet the requirements. Your return should be an exit, since the OP asked to "drop out". – SaintHax May 31 '17 at 13:28
    This is not a good idea, and it encourages bad practice. Consider the simple case of `ls`. If you invoke `ls foo` and get an error message of the form `ls: foo: No such file or directory\n` you understand the problem. If instead you get `ls: foo: No such file or directory\nerror with ls\n` you become distracted by superfluous information. In this case, it is easy enough to argue that the superfluity is trivial, but it quickly grows. Concise error messages are important. But more importantly, this type of wrapper encourages writers to completely omit good error messages. – William Pursell Dec 11 '17 at 14:03
  • @GordonDavisson What is a purpose of quotes? – Tomilov Anatoliy Jan 18 '18 at 09:12

What do you mean by "drop out and echo the error"? If you mean you want the script to terminate as soon as any command fails, then just do

set -e    # DON'T do this.  See commentary below.

at the start of the script (but note warning below). Do not bother echoing the error message: let the failing command handle that. In other words, if you do:

#!/bin/sh

set -e    # Use caution.  eg, don't do this
command1
command2
command3

and command2 fails, while printing an error message to stderr, then it seems that you have achieved what you want. (Unless I misinterpret what you want!)

As a corollary, any command that you write must behave well: it must report errors to stderr instead of stdout (the sample code in the question prints errors to stdout) and it must exit with a non-zero status when it fails.

However, I no longer consider this to be a good practice. set -e has changed its semantics with different versions of bash, and although it works fine for a simple script, there are so many edge cases that it is essentially unusable. (Consider things like: set -e; foo() { false; echo should not print; } ; foo && echo ok The semantics here are somewhat reasonable, but if you refactor code into a function that relied on the option setting to terminate early, you can easily get bitten.) IMO it is better to write:

 #!/bin/sh

 command1 || exit
 command2 || exit
 command3 || exit

or

#!/bin/sh

command1 && command2 && command3
William Pursell
    Be advised that while this solution is the simplest, it does not let you perform any cleanup on failure. – Josh J Jun 19 '15 at 14:38
    Cleanup can be accomplished with traps. (eg `trap some_func 0` will execute `some_func` at exit) – William Pursell Jun 19 '15 at 22:22
    Also note that the semantics of errexit (set -e) have changed in different versions of bash, and will often behave unexpectedly during function invocation and other settings. I no longer recommend its use. IMO, it is better to write `|| exit` explicitly after each command. – William Pursell Nov 17 '17 at 14:05

I have a set of scripting functions that I use extensively on my Red Hat system. They use the system functions from /etc/init.d/functions to print green [ OK ] and red [FAILED] status indicators.

You can optionally set the $LOG_STEPS variable to a log file name if you want to log which commands fail.

Usage

step "Installing XFS filesystem tools:"
try rpm -i xfsprogs-*.rpm
next

step "Configuring udev:"
try cp *.rules /etc/udev/rules.d
try udevtrigger
next

step "Adding rc.postsysinit hook:"
try cp rc.postsysinit /etc/rc.d/
try ln -s rc.d/rc.postsysinit /etc/rc.postsysinit
try echo $'\nexec /etc/rc.postsysinit' >> /etc/rc.sysinit
next

Output

Installing XFS filesystem tools:        [  OK  ]
Configuring udev:                       [FAILED]
Adding rc.postsysinit hook:             [  OK  ]

Code

#!/bin/bash

. /etc/init.d/functions

# Use step(), try(), and next() to perform a series of commands and print
# [  OK  ] or [FAILED] at the end. The step as a whole fails if any individual
# command fails.
#
# Example:
#     step "Remounting / and /boot as read-write:"
#     try mount -o remount,rw /
#     try mount -o remount,rw /boot
#     next
step() {
    echo -n "$@"

    STEP_OK=0
    [[ -w /tmp ]] && echo $STEP_OK > /tmp/step.$$
}

try() {
    # Check for `-b' argument to run command in the background.
    local BG=

    [[ $1 == -b ]] && { BG=1; shift; }
    [[ $1 == -- ]] && {       shift; }

    # Run the command.
    if [[ -z $BG ]]; then
        "$@"
    else
        "$@" &
    fi

    # Check if command failed and update $STEP_OK if so.
    local EXIT_CODE=$?

    if [[ $EXIT_CODE -ne 0 ]]; then
        STEP_OK=$EXIT_CODE
        [[ -w /tmp ]] && echo $STEP_OK > /tmp/step.$$

        if [[ -n $LOG_STEPS ]]; then
            local FILE=$(readlink -m "${BASH_SOURCE[1]}")
            local LINE=${BASH_LINENO[0]}

            echo "$FILE: line $LINE: Command \`$*' failed with exit code $EXIT_CODE." >> "$LOG_STEPS"
        fi
    fi

    return $EXIT_CODE
}

next() {
    [[ -f /tmp/step.$$ ]] && { STEP_OK=$(< /tmp/step.$$); rm -f /tmp/step.$$; }
    [[ $STEP_OK -eq 0 ]]  && echo_success || echo_failure
    echo

    return $STEP_OK
}
John Kugelman
  • this is pure gold. While I understand how to use the script I don't fully grasp each step, definitely outside of my bash scripting knowledge but I think it's a work of art nonetheless. – kingmilo Mar 01 '15 at 19:11
    Does this tool have a formal name? I'd love to read a man page on this style of step/try/next logging – ThorSummoner Apr 27 '15 at 18:00
  • These shell functions seem to be unavailable on Ubuntu? I was hoping to use this, something portable-ish though – ThorSummoner Jun 21 '15 at 04:14
  • @ThorSummoner, this is likely because Ubuntu uses Upstart instead of SysV init, and will soon be using systemd. RedHat tends to maintain backwards compatibility for long, which is why the init.d stuff is still there. – dragon788 Feb 13 '16 at 16:06
    I've posted an expansion on John's solution and allows it to be used on non-RedHat systems such as Ubuntu. See https://stackoverflow.com/a/54190627/308145 – Mark Thomson Jan 14 '19 at 23:24
  • A trivial one-line function when there is only one call to try: `try_step() { step "$1"; shift; try "$@"; next; }` So: `try_step "Installing XFS filesystem tools:" rpm -i xfsprogs-*.rpm` – quimm2003 Apr 20 '22 at 16:24

For what it's worth, a shorter way to write code to check each command for success is:

command1 || echo "command1 borked it"
command2 || echo "command2 borked it"

It's still tedious but at least it's readable.

John Kugelman
  • Didn't think of this, not the method I went with but it is quick and easy to read, thanks for the info :) – jwbensley Mar 26 '11 at 23:13
    To execute the commands silently and achieve the same thing: `command1 &> /dev/null || echo "command1 borked it"` – Matt Byrne Jun 02 '14 at 04:13
    I'm a fan of this method, is there a way to execute multiple commands after the OR? Something like `command1 || (echo command1 borked it ; exit)` – AndreasKralj Nov 09 '18 at 21:13
    @AndreasKralj, yes, you can run one liner to execute multiple commands after failure: command1 || { echo command1 borken it ; exit; } Last semicolon is the must! – Vladimir Perepechenko Mar 15 '23 at 17:30
    @VladimirPerepechenko Thank you very much! I've used this method for years now and it's served me well! – AndreasKralj Mar 16 '23 at 19:36

An alternative is simply to join the commands together with && so that the first one to fail prevents the remainder from executing:

command1 &&
  command2 &&
  command3

This isn't the syntax you asked for in the question, but it's a common pattern for the use case you describe. In general the commands should be responsible for printing failures so that you don't have to do so manually (maybe with a -q flag to silence errors when you don't want them). If you have the ability to modify these commands, I'd edit them to yell on failure, rather than wrap them in something else that does so.


Notice also that you don't need to do:

command1
if [ $? -ne 0 ]; then

You can simply say:

if ! command1; then

And when you do need to check return codes use an arithmetic context instead of [ ... -ne:

ret=$?
# do something
if (( ret != 0 )); then
dimo414

Instead of creating runner functions or using set -e, use a trap:

trap 'echo "error"; do_cleanup failed; exit' ERR
trap 'echo "received signal to stop"; do_cleanup interrupted; exit' SIGQUIT SIGTERM SIGINT

do_cleanup () { rm tempfile; echo "$1 $(date)" >> script_log; }

command1
command2
command3

The trap even has access to the line number and the command line of the command that triggered it. The variables are $BASH_LINENO and $BASH_COMMAND.

Dennis Williamson

Personally I much prefer to use a lightweight approach, as seen here:

yell() { echo "$0: $*" >&2; }
die() { yell "$*"; exit 111; }
try() { "$@" || die "cannot $*"; }
asuser() { sudo su - "$1" -c "${*:2}"; }

Example usage:

try apt-fast upgrade -y
try asuser vagrant "echo 'uname -a' >> ~/.profile"
SleepyCal

I've developed an almost flawless try & catch implementation in bash that allows you to write code like:

try 
    echo 'Hello'
    false
    echo 'This will not be displayed'

catch 
    echo "Error in $__EXCEPTION_SOURCE__ at line: $__EXCEPTION_LINE__!"

You can even nest the try-catch blocks inside themselves!

try {
    echo 'Hello'

    try {
        echo 'Nested Hello'
        false
        echo 'This will not execute'
    } catch {
        echo "Nested Caught (@ $__EXCEPTION_LINE__)"
    }

    false
    echo 'This will not execute too'

} catch {
    echo "Error in $__EXCEPTION_SOURCE__ at line: $__EXCEPTION_LINE__!"
}

The code is a part of my bash boilerplate/framework. It further extends the idea of try & catch with things like error handling with backtrace and exceptions (plus some other nice features).

Here's the code that's responsible just for try & catch:

set -o pipefail
shopt -s expand_aliases
declare -ig __oo__insideTryCatch=0

# if try-catch is nested, then set +e before so the parent handler doesn't catch us
alias try="[[ \$__oo__insideTryCatch -gt 0 ]] && set +e;
           __oo__insideTryCatch+=1; ( set -e;
           trap \"Exception.Capture \${LINENO}; \" ERR;"
alias catch=" ); Exception.Extract \$? || "

Exception.Capture() {
    local script="${BASH_SOURCE[1]#./}"

    if [[ ! -f /tmp/stored_exception_source ]]; then
        echo "$script" > /tmp/stored_exception_source
    fi
    if [[ ! -f /tmp/stored_exception_line ]]; then
        echo "$1" > /tmp/stored_exception_line
    fi
    return 0
}

Exception.Extract() {
    if [[ $__oo__insideTryCatch -gt 1 ]]
    then
        set -e
    fi

    __oo__insideTryCatch+=-1

    __EXCEPTION_CATCH__=( $(Exception.GetLastException) )

    local retVal=$1
    if [[ $retVal -gt 0 ]]
    then
        # BACKWARDS-COMPATIBLE WAY:
        # export __EXCEPTION_SOURCE__="${__EXCEPTION_CATCH__[(${#__EXCEPTION_CATCH__[@]}-1)]}"
        # export __EXCEPTION_LINE__="${__EXCEPTION_CATCH__[(${#__EXCEPTION_CATCH__[@]}-2)]}"
        export __EXCEPTION_SOURCE__="${__EXCEPTION_CATCH__[-1]}"
        export __EXCEPTION_LINE__="${__EXCEPTION_CATCH__[-2]}"
        export __EXCEPTION__="${__EXCEPTION_CATCH__[@]:0:(${#__EXCEPTION_CATCH__[@]} - 2)}"
        return 1 # so that we may continue with a "catch"
    fi
}

Exception.GetLastException() {
    if [[ -f /tmp/stored_exception ]] && [[ -f /tmp/stored_exception_line ]] && [[ -f /tmp/stored_exception_source ]]
    then
        cat /tmp/stored_exception
        cat /tmp/stored_exception_line
        cat /tmp/stored_exception_source
    else
        echo -e " \n${BASH_LINENO[1]}\n${BASH_SOURCE[2]#./}"
    fi

    rm -f /tmp/stored_exception /tmp/stored_exception_line /tmp/stored_exception_source
    return 0
}

Feel free to use, fork and contribute - it's on GitHub.

niieani
    I've looked at repo and not gonna use this myself, because it's way too much magic to my taste (IMO it's better to use Python if one needs more abstraction power), but definitely big **+1** from me because it looks just awesome. – Alexander Malakhov Jul 07 '18 at 15:06
  • Thanks for the kind words @AlexanderMalakhov. I agree about the amount of "magic" - that's one of the reasons we're brainstorming a simplified 3.0 version of the framework, which will be much easier to understand, to debug, etc. There's an open issue about 3.0 on GH, if you'd want to chip in your thoughts. – niieani Jul 08 '18 at 16:40
run() {
  "$@"
  local status=$?
  if [ $status -ne 0 ]
  then
    echo "$* failed with exit code $status" >&2
    return 1
  else
    return 0
  fi
}

run command1 && run command2 && run command3
Erik
    Don't run `$*`, it'll fail if any arguments have spaces in them; use `"$@"` instead. (Although $* is ok in the `echo` command.) – Gordon Davisson Mar 04 '11 at 16:02

Sorry that I cannot comment on the first answer, but you should use a new instance to execute the command: `cmd_output=$("$@")`

#!/bin/bash

function check_exit {
    cmd_output=$("$@")
    local status=$?
    echo $status
    if [ $status -ne 0 ]; then
        echo "error with $1" >&2
    fi
    return $status
}

function run_command() {
    exit 1
}

check_exit run_command
umount

For fish shell users who stumble on this thread.

Let foo be a function that does not "return" (echo) a value, but it sets the exit code as usual.
To avoid checking $status after calling the function, you can do:

foo; and echo success; or echo failure

And if it's too long to fit on one line:

foo; and begin
  echo success
end; or begin
  echo failure
end
Dennis

You can use @john-kugelman's awesome solution above on non-Red Hat systems by commenting out this line in his code:

. /etc/init.d/functions

Then, paste the code below at the end. Full disclosure: This is just a direct copy and paste of the relevant bits of the above-mentioned file taken from CentOS 7.

Tested on macOS and Ubuntu 18.04.


BOOTUP=color
RES_COL=60
MOVE_TO_COL="echo -en \\033[${RES_COL}G"
SETCOLOR_SUCCESS="echo -en \\033[1;32m"
SETCOLOR_FAILURE="echo -en \\033[1;31m"
SETCOLOR_WARNING="echo -en \\033[1;33m"
SETCOLOR_NORMAL="echo -en \\033[0;39m"

echo_success() {
    [ "$BOOTUP" = "color" ] && $MOVE_TO_COL
    echo -n "["
    [ "$BOOTUP" = "color" ] && $SETCOLOR_SUCCESS
    echo -n $"  OK  "
    [ "$BOOTUP" = "color" ] && $SETCOLOR_NORMAL
    echo -n "]"
    echo -ne "\r"
    return 0
}

echo_failure() {
    [ "$BOOTUP" = "color" ] && $MOVE_TO_COL
    echo -n "["
    [ "$BOOTUP" = "color" ] && $SETCOLOR_FAILURE
    echo -n $"FAILED"
    [ "$BOOTUP" = "color" ] && $SETCOLOR_NORMAL
    echo -n "]"
    echo -ne "\r"
    return 1
}

echo_passed() {
    [ "$BOOTUP" = "color" ] && $MOVE_TO_COL
    echo -n "["
    [ "$BOOTUP" = "color" ] && $SETCOLOR_WARNING
    echo -n $"PASSED"
    [ "$BOOTUP" = "color" ] && $SETCOLOR_NORMAL
    echo -n "]"
    echo -ne "\r"
    return 1
}

echo_warning() {
    [ "$BOOTUP" = "color" ] && $MOVE_TO_COL
    echo -n "["
    [ "$BOOTUP" = "color" ] && $SETCOLOR_WARNING
    echo -n $"WARNING"
    [ "$BOOTUP" = "color" ] && $SETCOLOR_NORMAL
    echo -n "]"
    echo -ne "\r"
    return 1
} 
Mark Thomson

When I use ssh I need to distinguish between problems caused by connection issues and error codes of the remote command while in errexit (set -e) mode. I use the following function:

# prepare environment on calling site:

rssh="ssh -o ConnectionTimeout=5 -l root $remote_ip"

function exit255 {
    local flags=$-
    set +e
    "$@"
    local status=$?
    set -$flags
    if [[ $status == 255 ]]
    then
        exit 255
    else
        return $status
    fi
}
export -f exit255

# callee:

set -e
set -o pipefail

[[ $rssh ]]
[[ $remote_ip ]]
[[ $( type -t exit255 ) == "function" ]]

rjournaldir="/var/log/journal"
if exit255 $rssh "[[ ! -d '$rjournaldir/' ]]"
then
    $rssh "mkdir '$rjournaldir/'"
fi
rconf="/etc/systemd/journald.conf"
if [[ $( $rssh "grep '#Storage=auto' '$rconf'" ) ]]
then
    $rssh "sed -i 's/#Storage=auto/Storage=persistent/' '$rconf'"
fi
$rssh systemctl reenable systemd-journald.service
$rssh systemctl is-enabled systemd-journald.service
$rssh systemctl restart systemd-journald.service
sleep 1
$rssh systemctl status systemd-journald.service
$rssh systemctl is-active systemd-journald.service
Tomilov Anatoliy

Checking the status in a functional manner:

assert_exit_status() {

  lambda() {
    local val_fd=$(echo $@ | tr -d ' ' | cut -d':' -f2)
    local arg=$1
    shift
    shift
    local cmd=$(echo $@ | xargs -E ':')
    local val=$(cat $val_fd)
    eval $arg=$val
    eval $cmd
  }

  local lambda=$1
  shift

  eval $@
  local ret=$?
  $lambda : <(echo $ret)

}

Usage:

assert_exit_status 'lambda status -> [[ $status -ne 0 ]] && echo Status is $status.' lls

Output

Status is 127
slavik

suppose

alias command1='grep a <<<abc'
alias command2='grep x <<<abc'
alias command3='grep c <<<abc'

either

{ command1 1>/dev/null || { echo "cmd1 fail"; /bin/false; } } && echo "cmd1 succeed" &&
{ command2 1>/dev/null || { echo "cmd2 fail"; /bin/false; } } && echo "cmd2 succeed" &&
{ command3 1>/dev/null || { echo "cmd3 fail"; /bin/false; } } && echo "cmd3 succeed"

or

{ { command1 1>/dev/null && echo "cmd1 succeed"; } || { echo "cmd1 fail"; /bin/false; } } &&
{ { command2 1>/dev/null && echo "cmd2 succeed"; } || { echo "cmd2 fail"; /bin/false; } } &&
{ { command3 1>/dev/null && echo "cmd3 succeed"; } || { echo "cmd3 fail"; /bin/false; } }

yields

cmd1 succeed
cmd2 fail

Tedious it is. But the readability isn't bad.

Darren Ng

If you want to exit with error code 1 right after the command fails:

One liner:

command1 || { echo "command1 borked it"; exit 1; }
command2 || { echo "command2 borked it"; exit 1; }

Be careful to add a space after { and before } as shown above.

Extra info:

Note that you cannot use round brackets/parentheses like this:
command1 || (echo "command1 borked it" && exit 1)

Because (exit 1) runs in a subshell, it exits the subshell but does NOT exit the parent shell. For more info, please check this answer: https://unix.stackexchange.com/a/172543/513474

Puneeth G R