
I'm trying to find a way to emulate the behavior of set -e in a function, but only within the scope of that function.

Basically, I want a function in which, if any simple command would trigger set -e, it instead returns 1 up one level. The goal is to isolate sets of risky jobs into functions so that I can gracefully handle them.

codeforester
OstermanA

4 Answers


If you want any failing command to return 1, you can achieve that by following each command with || return 1.

For instance:

false || return 1  # inside a function, this returns 1 to the caller
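
In context, the pattern looks like this (a minimal sketch; the function name and messages are illustrative):

```shell
#!/bin/bash

# Each risky command is followed by || return 1, so the first
# failure makes the whole function return 1 to its caller.
risky_job() {
    echo "step 1"  || return 1
    false          || return 1   # this step fails...
    echo "step 3"  || return 1   # ...so this line never runs
}

if risky_job; then
    echo "risky_job succeeded"
else
    echo "risky_job failed"      # this branch is taken
fi
```

Because the `|| return 1` is explicit, it works even when the function is called in a conditional context, which is not true of `set -e`.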

I am a big fan of never letting any command fail without explicit handling. For my scripts, I am using an exception handling technique where I return errors in a way that is not return codes, and trap all errors (with bash traps). Any command with a non-zero return code automatically means an improperly handled situation or bug, and I prefer my scripts to fail as soon as such a situation occurs.
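
A minimal sketch of that trap-based idea (the function name and message are my own, not a fixed convention):

```shell
#!/bin/bash

# Run the risky work in a subshell whose ERR trap treats any
# unhandled non-zero return code as a bug and aborts at once.
demo() (
    set -o errtrace   # let the ERR trap fire inside nested functions too
    trap 'echo "unhandled error (status $?)" >&2; exit 1' ERR
    echo "before the failure"
    false             # fires the ERR trap; the subshell exits with 1
    echo "never reached"
)

# Capture the status without putting demo in a tested context
# (a tested context such as `if demo` would suppress the ERR trap).
out=$(demo 2>/dev/null; echo "status=$?")
echo "$out"   # -> "before the failure" then "status=1"
```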

codeforester
Fred
  • I meant inside the function, just after entering. But that does not work either. This is not the way I do error checking, I should have checked first. Removed that part of my response. – Fred Jan 21 '17 at 01:26
  • Sometimes, it's better to trap the trap! – AKS Jan 21 '17 at 04:28
  • What do you mean by "better to trap the trap"? – Fred Jan 21 '17 at 04:30
  • In my case, I was using `trap` for a given signal or when something failed. I found that if you are graceful enough to call a decent function to show some meaningful info before actually exiting with `exit nn`, and the user is fast enough to hit ^C at that moment, the trap was calling its graceful xyz() twice or more, so I had to use a flag. – AKS Jan 21 '17 at 04:32
  • Indeed, you have to make sure you do not recurse into your error handling trap, either due to user intervention or a bug. I have a framework I built (I literally use the same trap to catch all errors in all my scripts), so I do not mind the trap (actually the function it calls) being quite heavyweight. The actual challenge is finding a good way to manage exceptions (i.e. controlled errors) separately from uncontrolled errors (non-zero return codes). That's the really tricky bit: making it work without introducing too much additional code. – Fred Jan 21 '17 at 04:45

Caution: I highly advise against using this technique. If you run the function in a subshell environment, you almost get the behavior you desire. Consider:

#!/bin/bash

foo() (  # Use parens to get a sub-shell
        set -e  # Does not impact the main script
        echo This is executed
        false
        echo 'This should *not* be executed'  # quoted so *not* is not glob-expanded
)

foo  # Function call fails, returns 1
echo return: $?

# BUT: this is a good reason to avoid this technique
if foo; then  # set -e inside foo is ignored when the call is tested by if
        echo Foo returned 0!!
else
        echo fail
fi
false    # Demonstrates that set -e is not set for the script
echo ok
William Pursell
  • What would you recommend? Error handling in bash scripts is something I have worked very hard on performing systematically (i.e. everywhere, and with some kind of a framework and not ad-hoc approach), and I am curious as to how other people do it ; I have not seen it discussed in depth anywhere. – Fred Jan 21 '17 at 01:40
  • `set -e` is the closest you can get to exception handling, but it's very primitive. You can do a trap on return to handle clean up in the function and explicitly return on any failure (eg `cmd || return`), and that's probably the best you can get. – William Pursell Jan 21 '17 at 14:43
  • In terms of bash support, yes, set -e and traps are pretty much what you have, but in terms of error handling support, you can build an exception handling mechanism that causes all unhandled non-zero return codes to abort the script. This is what I built, and I have never seen anybody else mention doing that, despite the huge advantages, in my view, in terms of long-term code reliability. – Fred Jan 21 '17 at 14:59
  • By the way, I know it sounds like what I am describing seems like set -e ; it really is not, but a few short lines do not allow any significant explanation. – Fred Jan 21 '17 at 15:01

It seems like you are looking for "nested exceptions", somewhat like what Java gives you. For your requirement of scoping it, how about doing a `set -e` at the beginning of the function and making sure to run `set +e` before returning from it?

Another idea, which is not efficient or convenient, is to call your function in a subshell:

# some code

(set -e; my_function)
if [[ $? -ne 0 ]]; then
  # the function didn't succeed...
fi

# more code

In any case, please be aware that set -e is not the greatest way to handle errors in a shell script. There are way too many issues making it quite unreliable.

The approach I take for large scripts that need to exist for a long time in a production environment is:

  • create a library of functions to do all the standard stuff
  • the library will have a wrapper around each standard action (say, mv, cp, mkdir, ln, rm, etc.) that would validate the arguments carefully and also handle exceptions
  • upon exception, the wrapper exits with a clear error message
  • the exit itself could be a library function, somewhat like this:


# library of common functions

trap '_error_handler' ERR
trap '_exit_handler'  EXIT
trap '_int_handler'   SIGINT

_error_handler() {
  :  # appropriate code (a function body cannot consist of a comment alone)
}
# other handlers go here...
#

exit_if_error() {
  local error_code=${1:-0}
  local error_message=${2:-"Unknown error"}

  [[ $error_code == 0 ]] && return 0  # it is all good

  # this can be enhanced to print out the "stack trace"
  >&2 printf "%s\n" "$error_message"

  # out of here
  my_exit "$error_code"
}

my_exit() {
  local exit_code=${1:-0}
  _global_graceful_exit=1  # this can be checked by the "EXIT" trap handler
  exit "$exit_code"
}

# simple wrapper for cp
my_cp() {
  # add code to check arguments more effectively
  cp -- "$1" "$2"
  exit_if_error $? "cp of '$1' to '$2' failed"
}

# main code

source /path/to/library.sh

...
my_cp file1 file2
# clutter-free code

This, along with effective use of trap to take action on ERR and EXIT events, would be a good way to write reliable shell scripts.
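
For instance, the EXIT handler mentioned above could use the `_global_graceful_exit` flag to tell an intended exit from an unexpected one (a sketch; the handler body is illustrative, the names follow the answer's code):

```shell
#!/bin/bash

_global_graceful_exit=0

# EXIT handler: silent when my_exit was used, noisy otherwise, so
# any exit path that bypassed my_exit gets flagged.
_exit_handler() {
    if [[ ${_global_graceful_exit:-0} -ne 1 ]]; then
        echo "script terminated unexpectedly" >&2
        # emergency cleanup / notification could go here
    fi
}
trap '_exit_handler' EXIT

my_exit() {
    _global_graceful_exit=1   # tell the EXIT trap this exit is intended
    exit "${1:-0}"
}
```

Ending the script through `my_exit` keeps the trap quiet; a bare `exit 1`, or any abort that bypasses `my_exit`, prints the warning.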

codeforester
  • `set -e ` doesn't actually work at all for what I want. Even when set inside a function, the entire script aborts instantly. It's no different than setting it before I run that script. http://pastebin.com/p84B5R8b – OstermanA Jan 21 '17 at 06:55
  • Just take a look at the subshell idea in my answer. If you are writing something that needs to run reliably, `set -e` isn't recommended. – codeforester Jan 21 '17 at 07:01
  • Your exit_if_error function is not what I was hoping for... but the more I look at it the more it makes sense. Without a real try/catch mechanism it's probably the best I can do. – OstermanA Jan 21 '17 at 07:04
  • I have used that idea a lot of times in production environments for a long time. When the complexity increased, I switched to Perl for more reliability, but the basic approach didn't change. – codeforester Jan 21 '17 at 07:08

Doing more research, I found a solution I rather like in Google's Shell Style Guide. There are some seriously interesting suggestions in there, but I think I'm going to go with this one for readability:

if ! mv "${file_list}" "${dest_dir}/" ; then
  echo "Unable to move ${file_list} to ${dest_dir}" >&2
  exit "${E_BAD_MOVE}"
fi
OstermanA