
In a loop in a shell script, I am connecting to various servers and running some commands. For example:

#!/bin/bash
FILENAME=$1
cat $FILENAME | while read HOST
do
   0</dev/null ssh $HOST 'echo password| sudo -S 
   echo $HOST 
   echo $?      
   pwd
   echo $?'
done

Here I am running the "echo $HOST" and "pwd" commands, and I am getting their exit status via "echo $?".

My question is: I want to store the exit status of the commands I run remotely in some variable, and then, based on whether the command succeeded or not, write a log entry to a local file.

Any help and code is appreciated.

Kashif Usmani

4 Answers


ssh will exit with the exit code of the remote command. For example:

$ ssh localhost exit 10
$ echo $?
10

So after your ssh command exits, you can simply check $?. You need to make sure that you don't mask your return value. For example, your ssh command currently finishes with:

echo $?

That echo itself always succeeds, so $? will then always be 0. What you probably want is something more like this:

while read HOST; do
  echo $HOST
  if ssh $HOST 'somecommand' < /dev/null; then
    echo SUCCESS
  else
    echo FAIL
  fi
done

You could also write it like this:

while read HOST; do
  echo $HOST
  ssh $HOST 'somecommand' < /dev/null
  if [ $? -eq 0 ]; then
    echo SUCCESS
  else
    echo FAIL
  fi
done
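To also cover the question's requirement of writing to a local log file, the loop above can be extended. A minimal runnable sketch, with the ssh call replaced by a local stand-in function so it runs anywhere (LOGFILE, the host names, and the `remote` helper are made up for illustration):

```shell
#!/bin/bash
LOGFILE=./remote.log
: > "$LOGFILE"                 # start with an empty log

remote() {                     # stand-in for: ssh "$1" 'somecommand' < /dev/null
  [ "$1" = host1 ]             # pretend host1 succeeds and host2 fails
}

for HOST in host1 host2; do
  if remote "$HOST"; then
    echo "$HOST SUCCESS" >> "$LOGFILE"
  else
    status=$?                  # nonzero exit code, captured immediately
    echo "$HOST FAIL ($status)" >> "$LOGFILE"
  fi
done
cat "$LOGFILE"
```

In real use, replace the `remote` call with the actual ssh invocation; the `if`/`else` and the `>> "$LOGFILE"` redirections stay the same.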
larsks
    Are you saying that I need to ssh separately to run each command so that I can extract their exit status? I am trying to run at least 5 commands in same ssh session in my real code. – Kashif Usmani Mar 13 '13 at 17:01
  • Hi, @larsks. Can you explain what `ssh localhost exit 10` means? It means execute ssh localhost with an explicit 10 as status code? Here, exit is an option of ssh or the linux command `exit`? – Gab是好人 Jan 19 '17 at 15:26
  • Remember that the basic syntax of the ssh command is `ssh <host> <command>`. So in the above answer, `exit 10` is the command passed to the remote shell. – larsks Jan 19 '17 at 19:50
  • @larsks quite an old answer, but still: what if you are executing 10+ commands and not just one? what is the best practice to catch an error that perhaps occurred and exit prematurely? – trainoasis Aug 16 '19 at 06:44
  • Stick your commands in a script and then `ssh remotehost bash < your_script.sh`, maybe, and make sure your script exits with an appropriate error code if something fails. `set -e` may be useful in this case, which causes the script to abort with an error if any command in the script fails. Although if you're running lots of commands over lots of hosts, it's probably time you were looking into tools like [Ansible](https://ansible.com) or similar. – larsks Aug 16 '19 at 10:59
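The script-piping idea from the last comment can be sketched as follows. To keep the example self-contained, `bash` reads the script locally; over ssh it would be `ssh remotehost bash < your_script.sh` (the filename is made up):

```shell
#!/bin/bash
# Write the commands that would run remotely into a local script.
cat > your_script.sh <<'EOF'
set -e              # abort on the first failing command
echo step1
false               # fails, so step2 is never reached
echo step2
EOF

bash < your_script.sh || status=$?   # capture the script's exit code
echo "script exited with ${status:-0}"
```

Because of `set -e`, the fed-in script stops at the first failure and its nonzero exit code propagates back to the caller, which is exactly what you need for per-host logging.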

You can assign the exit status to a variable simply by doing:

variable=$?

right after the command you are trying to inspect. Do not `echo $?` first, or $? will then hold the exit code of that echo (usually 0).
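A minimal sketch of this, with `sh -c 'exit 3'` standing in for the ssh command:

```shell
#!/bin/bash
set +e                  # do not abort on failure; we want to inspect $? ourselves

sh -c 'exit 3'          # stands in for: ssh $HOST 'somecommand'
status=$?               # must be the very next command
echo "command exited with $status"
echo "\$? is now $?"    # 0, because the echo above succeeded
```

The second echo demonstrates why the capture has to happen immediately: every command, including echo, overwrites $?.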

xtonousou

An interesting approach is to retrieve the whole output of each ssh command into a local variable using backticks, separating the fields with a special character (for simplicity, say ":"), something like:

export MYVAR=`ssh $HOST 'echo -n ${HOSTNAME}\:;pwd'`

After this you can use awk to split MYVAR into its parts and continue testing in bash.
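Splitting such a combined result with awk might look like this. MYVAR is filled with a fixed value here, since the real ssh call needs a reachable host; the host and path are made-up examples:

```shell
#!/bin/bash
# Over ssh this would be: MYVAR=`ssh $HOST 'echo -n ${HOSTNAME}\:;pwd'`
MYVAR="server1:/home/kashif"

REMOTE_HOST=$(echo "$MYVAR" | awk -F: '{print $1}')
REMOTE_DIR=$(echo "$MYVAR" | awk -F: '{print $2}')
echo "host: $REMOTE_HOST"
echo "dir:  $REMOTE_DIR"
```

Note that this captures the commands' output, not their exit status; the separator character must also not occur in the output itself, or the split will be wrong.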

gdh

Perhaps prepare the log file on the other side and pipe it to stdout, like this:

ssh -n user@example.com 'x() { local ret; "$@" >&2; ret=$?; echo "[`date +%Y%m%d-%H%M%S` $ret] $*"; return $ret; };
x true
x false
x sh -c "exit 77";'  > local-logfile 

Basically just prefix everything on the remote you want to invoke with this x wrapper. It works for conditionals, too, as it does not alter the exit code of a command.

You can easily loop this command.

This example writes into the log something like:

[20141218-174611 0] true
[20141218-174611 1] false
[20141218-174611 77] sh -c exit 77

Of course you can make the logfile easier to parse, or adapt its format to your wishes. Note that the uncaught normal stdout of the remote programs is written to stderr (see the redirection in x()).
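Since x() only uses shell builtins plus date, the wrapper can be tried locally without ssh. A sketch (the log filename is made up):

```shell
#!/bin/bash
# The x() wrapper from the answer, run in a local shell instead of over ssh.
x() { local ret; "$@" >&2; ret=$?; echo "[`date +%Y%m%d-%H%M%S` $ret] $*"; return $ret; }

# Same commands as in the answer; || true keeps the script going after failures.
{ x true; x false; x sh -c "exit 77"; } > local-logfile || true
cat local-logfile
```

Each log line records the timestamp, the exit code, and the command that produced it, so a later grep for anything but `0]` finds all failures.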

If you need a recipe to catch and prepare the output of a command for the logfile, here is a copy of such a catcher from https://gist.github.com/hilbix/c53d525f113df77e323d. It is a bigger piece of boilerplate that runs something in the current shell context, postprocessing stdout and stderr without disturbing the return code:

# Redirect lines of stdin/stdout to some other function
# outfn and errfn get following arguments
# "cmd args.." "one line full of output"
: catch outfn errfn cmd args..
catch()
{
  local ret o1 o2 tmp
  tmp=$(mktemp "catch_XXXXXXX.tmp")
  mkfifo "$tmp.out"
  mkfifo "$tmp.err"
  pipestdinto "$1" "${*:3}" <"$tmp.out" &
  o1=$!
  pipestdinto "$2" "${*:3}" <"$tmp.err" &
  o2=$!
  "${@:3}" >"$tmp.out" 2>"$tmp.err"
  ret=$?
  rm -f "$tmp.out" "$tmp.err" "$tmp"
  wait $o1
  wait $o2
  return $ret
}

: pipestdinto cmd args..
pipestdinto()
{
  local x
  while read -r x; do "$@" "$x" </dev/null; done
}

STAMP()
{
  date +%Y%m%d-%H%M%S
}

# example output function
NOTE()
{
  echo "NOTE `STAMP`: $*"
}

ERR()
{
  echo "ERR `STAMP`: $*" >&2
}

catch_example()
{
  # Example use
  catch NOTE ERR find /proc -ls
}

See the catch_example function at the end for an example of its use.

Tino