
There are a few layers here, so bear with me.

My `docker-container ssh -c"echo 'YAY!'; exit 25;"` command executes `echo 'YAY!'; exit 25;` in my Docker container. It returns:

YAY
error:  message=YAY!
, code=25

I need to know if the command within the container was successful, so I append the following to the command:

docker-container ssh -c"echo 'YAY!'; exit 25;"  >&1 2>/tmp/stderr; cat /tmp/stderr | grep 'code=' | cut -d'=' -f2 | { read exitStatus; echo $exitStatus; }

This sends stderr to `/tmp/stderr` and, via the `echo $exitStatus`, returns:

YAY!
25

So, this is exactly what I want. I want `$exitStatus` saved to a variable I can use later. My problem is that when I place this exact code into a bash script (a Git pre-commit hook), the exit status is null.
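For reference, the symptom reproduces without the container at all; here `printf` stands in for the error line that `docker-container` writes:

```shell
#!/usr/bin/env bash
# printf stands in for the container's ", code=25" stderr line
printf ', code=25\n' | grep 'code=' | cut -d'=' -f2 | { read exitStatus; echo "inside: $exitStatus"; }
echo "after: $exitStatus"   # nothing after the colon -- the variable is gone
```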

Here is my bash script:

# .git/hooks/pre-commit

if [ -z ${DOCKER_MOUNT+x} ];
then
    docker-container ssh -c"echo 'YAY!'; exit 25;"  >&1 2>/tmp/stderr; cat /tmp/stderr | grep 'code=' | cut -d'=' -f2 | { read exitStatus; echo $exitStatus; }

    exit $exitStatus;
else
    echo "Container detected!"
fi;
tylersDisplayName
    Eh? You *can't possibly* be getting `null` as an exit status; the only possible values are integers. You might have `0`, but you can't have `null`. – Charles Duffy Feb 16 '18 at 20:32
    [ShellCheck](http://ShellCheck.net) automatically detects this issue. – that other guy Feb 16 '18 at 20:34
  • *nod* -- if you expect the `exit` in the pipeline to cause the script to exit, well... something that explicitly asked why that doesn't happen with the irrelevant context factored out would actually be a much clearer question than this is. – Charles Duffy Feb 16 '18 at 20:35
    This is in a respect duplicative of [bash: why piping to read only works when fed into `while read` construct](https://stackoverflow.com/questions/13763942/bash-why-piping-input-to-read-only-works-when-fed-into-while-read-const), insofar as understanding the answer to the other explains why this doesn't work. [BashFAQ #24](http://mywiki.wooledge.org/BashFAQ/024) is pertinent reading as well. – Charles Duffy Feb 16 '18 at 20:37
    BTW, writing to `/tmp/stderr` is dangerous if you're on a shared system. Two scripts using that same name at once is a problem, but a much *worse* problem is if another user (or a compromised network service -- and remember, `/tmp` is world-writable!) replaces `/tmp/stderr` with a symlink to a file you have permission to write to but they don't, and which they want to see deleted. Don't use hardcoded temporary filenames -- this is what `mktemp` exists to avoid. – Charles Duffy Feb 16 '18 at 20:40
  • BTW, `cat foo | grep bar` is significantly less efficient than `grep bar foo`. – Charles Duffy Feb 16 '18 at 20:44
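A sketch of the `mktemp` advice from the comments, applied to the command from the question. The `docker-container` function here is a stand-in that mimics the container's output, so the snippet runs anywhere:

```shell
#!/usr/bin/env bash
# stand-in for the real docker-container CLI, so this sketch is self-contained
docker-container() { echo "YAY!"; printf 'error:  message=YAY!\n, code=25\n' >&2; }

# unique, private temp file instead of a hardcoded /tmp/stderr
stderr_file=$(mktemp) || exit 1
trap 'rm -f "$stderr_file"' EXIT

docker-container ssh -c"echo 'YAY!'; exit 25;" 2>"$stderr_file"
exitStatus=$(grep 'code=' "$stderr_file" | cut -d'=' -f2)
echo "exitStatus=$exitStatus"
```

Because the assignment happens via command substitution in the current shell, `exitStatus` also survives past the parsing step.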

1 Answer


That's because you're setting the variable in a pipeline. Each command in a pipeline runs in a subshell, and when the subshell exits, its variables are no longer available.
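You can see the subshell directly with `$BASH_SUBSHELL`, which is 0 in the main shell and increments for each subshell level:

```shell
#!/usr/bin/env bash
echo "main shell depth: $BASH_SUBSHELL"                       # 0
echo hi | { echo "pipeline stage depth: $BASH_SUBSHELL"; }    # 1 in default bash
```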

bash lets you run the pipeline's last command in the current shell via the `lastpipe` option, but you also have to turn off job control.

An example:

# default bash
$ echo foo | { read x; echo x=$x; } ; echo x=$x
x=foo
x=

# with "lastpipe" configuration
$ set +m; shopt -s lastpipe

$ echo foo | { read x; echo x=$x; } ; echo x=$x
x=foo
x=foo

Add `set +m; shopt -s lastpipe` to your script and you should be good.
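Applied to the pipeline from the question (with `printf` standing in for the container's stderr, since `docker-container` isn't available here):

```shell
#!/usr/bin/env bash
set +m; shopt -s lastpipe   # let the last pipeline stage run in the current shell

# printf stands in for the stderr the container call produces
printf 'error:  message=YAY!\n, code=25\n' |
    grep 'code=' | cut -d'=' -f2 | read exitStatus

echo "exitStatus=$exitStatus"   # 25, not empty
```

Note that `lastpipe` only takes effect when job control is off; in a non-interactive script it already is, so `set +m` is mostly belt-and-braces there.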


And as Charles comments, there are more efficient ways to do it. For example:

source <(docker-container ssh -c "echo 'YAY!'; exit 25;" 2>&1 1>/dev/null | awk -F= '/code=/ {print "exitStatus=" $2}')
echo $exitStatus
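A variant without `source`, assigning directly via command substitution (again with a stand-in function for `docker-container` so it runs anywhere):

```shell
#!/usr/bin/env bash
# stand-in for the real docker-container CLI
docker-container() { echo "YAY!"; printf 'error:  message=YAY!\n, code=25\n' >&2; }

# 2>&1 1>/dev/null: keep stderr (where the code= line lives), drop stdout
exitStatus=$(docker-container ssh -c"echo 'YAY!'; exit 25;" 2>&1 1>/dev/null |
             awk -F= '/code=/ {print $2}')
echo "exitStatus=$exitStatus"
```

The command substitution runs in a subshell, but the assignment itself happens in the current shell, so no `lastpipe` is needed.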
glenn jackman