20

Is there an alternative to tee which captures standard output and standard error of the command being executed and exits with the same exit status as the processed command?

Something like the following:

eet -a some.log -- mycommand --foo --bar

Where "eet" is an imaginary alternative to "tee" :) (-a means append and -- separates the captured command). It shouldn't be hard to hack such a command, but maybe it already exists and I'm not aware of it?

pachanga
    I assume that the real question here is: how to tee output AND capture exit status. If so: possible duplicate of [bash: tee output AND capture exit status](http://stackoverflow.com/questions/1221833/bash-tee-output-and-capture-exit-status) – Lesmana May 13 '13 at 16:36

9 Answers

34

This works with Bash:

(
  set -o pipefail
  mycommand --foo --bar | tee some.log
)

The parentheses are there to limit the effect of pipefail to just the one command.

From the bash(1) man page:

The return status of a pipeline is the exit status of the last command, unless the pipefail option is enabled. If pipefail is enabled, the pipeline's return status is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands exit successfully.
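To see concretely what pipefail changes, here is a small demonstration, using false so the left side of the pipe fails:

```shell
#!/bin/bash
false | tee /dev/null
echo "without pipefail: $?"   # tee succeeded, so this is 0

set -o pipefail
false | tee /dev/null
echo "with pipefail: $?"      # now the pipeline reports false's status, 1
```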
Ville Laurikari
11

I stumbled upon a couple of interesting solutions at Capture Exit Code Using Pipe & Tee.

  1. There is the $PIPESTATUS array available in Bash (grab the value right away, because the very next command resets it):

    false | tee /dev/null
    status=${PIPESTATUS[0]}
    [ $status -eq 0 ] || exit $status
    
  2. And the simplest prototype of "eet" in Perl may look as follows:

    open MAKE, "command 2>&1 |" or die;
    open (LOGFILE, ">>some.log") or die;
    while (<MAKE>) {
        print LOGFILE $_;   # append to the log file
        print;              # and echo to standard output
    }
    close MAKE;             # To get $?
    my $exit = $? >> 8;
    close LOGFILE;
    
pachanga
5

Here's an eet. Works with every Bash I can get my hands on, from 2.05b to 4.0.

#!/bin/bash
tee_args=()
while [[ $# -gt 0 && $1 != -- ]]; do
    tee_args=("${tee_args[@]}" "$1")
    shift
done
shift
# now ${tee_args[*]} has the arguments before --,
# and $* has the arguments after --

# redirect standard out through a pipe to tee (process substitution)
exec > >(tee "${tee_args[@]}")

# do the *real* exec of the desired program
exec "$@"

(pipefail and $PIPESTATUS are nice, but I recall them being introduced in 3.1 or thereabouts.)
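A quick sanity check that redirecting stdout through tee with process substitution (`exec > >(tee …)`) preserves a later exec'd command's exit status; false stands in for the real command here:

```shell
#!/bin/bash
# The inner script mirrors the structure of the eet script above:
# first redirect stdout through tee, then exec the real command.
bash -c 'exec > >(tee /dev/null); exec false'
echo "exit status: $?"   # 1, i.e. false's status, not tee's 0
```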

ephemient
  • It's strange - but it doesn't work for me: jirka@debian:~/monitor$ exec | wc -c \\ 0 \\ jirka@debian:~/monitor$ exec echo a \\ a (\\ means newline) – jpalecek Mar 10 '10 at 11:05
3

This is what I consider to be the best pure-Bourne-shell solution to use as the base upon which you could build your "eet":

# You want to pipe command1 through command2:
exec 4>&1
exitstatus=`{ { command1; echo $? 1>&3; } | command2 1>&4; } 3>&1`
# $exitstatus now has command1's exit status.

I think this is best explained from the inside out – command1 will execute and print its regular output on stdout (file descriptor 1), then once it's done, echo will execute and print command1's exit code on its stdout, but that stdout is redirected to file descriptor three.

While command1 is running, its stdout is being piped to command2 (echo's output never makes it to command2 because we send it to file descriptor 3 instead of 1, which is what the pipe reads). Then we redirect command2's output to file descriptor 4, so that it also stays out of file descriptor one – because we want file descriptor one clear for when we bring the echo output on file descriptor three back down into file descriptor one so that the command substitution (the backticks) can capture it.

The final bit of magic is that first exec 4>&1 we did as a separate command – it opens file descriptor four as a copy of the external shell's stdout. Command substitution will capture whatever is written on standard out from the perspective of the commands inside it – but, since command2's output is going to file descriptor four as far as the command substitution is concerned, the command substitution doesn't capture it – however, once it gets "out" of the command substitution, it is effectively still going to the script's overall file descriptor one.

(The exec 4>&1 has to be a separate command to work with many common shells. In some shells it works if you just put it on the same line as the variable assignment, after the closing backtick of the substitution.)

(I use compound commands ({ ... }) in my example, but subshells (( ... )) would also work. The subshell will just cause a redundant forking and awaiting of a child process, since each side of a pipe and the inside of a command substitution already normally implies a fork and await of a child process, and I don't know of any shell being coded to recognize that it can skip one of those forks because it's already done or is about to do the other.)

You can look at it in a less technical and more playful way, as if the outputs of the commands are leapfrogging each other: command1 pipes to command2, then the echo's output jumps over command2 so that command2 doesn't catch it, and then command2's output jumps over and out of the command substitution just as echo lands just in time to get captured by the substitution so that it ends up in the variable, and command2's output goes on its way to the standard output, just as in a normal pipe.

Also, as I understand it, at the end of this command, $? will still contain the return code of the second command in the pipe, because variable assignments, command substitutions, and compound commands are all effectively transparent to the return code of the command inside them, so the return status of command2 should get propagated out.

A caveat is that it is possible that command1 will at some point end up using file descriptors three or four, or that command2 or any of the later commands will use file descriptor four, so to be more hygienic, we would do:

exec 4>&1
exitstatus=`{ { command1 3>&-; echo $? 1>&3; } 4>&- | command2 1>&4; } 3>&1`
exec 4>&-

Commands inherit file descriptors from the process that launches them, so the entire second line will inherit file descriptor four, and the compound command followed by 3>&1 will inherit the file descriptor three. So the 4>&- makes sure that the inner compound command will not inherit file descriptor four, and the 3>&- makes sure that command1 will not inherit file descriptor three, so command1 gets a 'cleaner', more standard environment. You could also move the inner 4>&- next to the 3>&-, but I figure why not just limit its scope as much as possible.

Almost no programs use pre-opened file descriptors three and four directly, so you almost never have to worry about it, but the latter, more hygienic form is probably best to keep in mind and use for general-purpose cases.
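Putting the pieces together, here is a runnable sketch of the hygienic version, with false and cat standing in for command1 and command2 so there is a non-zero status to capture:

```shell
#!/bin/sh
# command1 = false (exits 1), command2 = cat (standing in for tee).
exec 4>&1
exitstatus=`{ { false 3>&-; echo $? 1>&3; } 4>&- | cat 1>&4; } 3>&1`
exec 4>&-
echo "command1 exited with: $exitstatus"   # 1
```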

mtraceur
2
{ mycommand --foo --bar 2>&1; ret=$?; } | tee -a some.log; (exit $ret)
Peter Ericson
    At least on bash, $ret is not available outside of the {list}, so this doesn't work. – Steve Madsen Jun 12 '09 at 13:34
  • I like a variation of this solution where the value of $? is written to a file: `{ mycommand --foo --bar; echo $? > exit-code; } | tee some.log; ret=$(cat exit-code)`. Yes, it's convoluted to write to a temp file, but it's POSIX-friendly, unlike pipefail which is a bashism and also doesn't work in some scenarios. – ctrueden Dec 02 '22 at 01:44
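The temp-file variation from the last comment, expanded into a runnable POSIX sketch (using mktemp rather than a fixed file name, and false standing in for the real command):

```shell
#!/bin/sh
# POSIX-friendly: no pipefail, no PIPESTATUS. The left-hand command's
# exit status is written to a temp file from inside the pipeline.
tmp=$(mktemp)
{ false; echo $? > "$tmp"; } | tee /dev/null
ret=$(cat "$tmp")
rm -f "$tmp"
echo "captured exit status: $ret"   # 1
```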
1

KornShell, all in one line:

foo; RET_VAL=$?; if test ${RET_VAL} != 0; then echo ${RET_VAL}; echo "Error occurred!" > /tmp/out.err; exit 2; fi | tee >> /tmp/out.err; if test ${RET_VAL} != 0; then exit ${RET_VAL}; fi
sal
0

I was also looking for a one-liner that works in do shell script in AppleScript, which always uses /bin/sh (emulated by zsh). This version is the only one I found that works well:

mycommand 2>&1 | tee -a output.log; exit ${PIPESTATUS[0]}

or in AppleScript

set theResult to do shell script "mycommand 2>&1 | tee -a " & quoted form of logFilePath & "; exit ${PIPESTATUS[0]}"
Nebula
-2

Assuming Bash or Z shell (zsh),

my_command >>my_log 2>&1

N.B. The sequence of redirection and duplication of standard error onto standard output is significant!

I didn't realise you wanted to see the output on screen as well. This will of course direct all output to the file my_log.

Rob Wells
-2
#!/bin/sh
logfile="$1"
shift
exec 2>&1
exec "$@" | tee "$logfile"

Hopefully this works for you.

Marko Teiste
  • Run this with arguments "foo false" -- should return with exit code 1 (from false), but I get 0 (presumably from tee). – Steve Madsen Jun 12 '09 at 13:29
  • My bad. Have to do it the old fashioned pipe way. PIPE=/tmp/$$.pipe; mkfifo "$PIPE"; logfile="$1"; shift; tee "$logfile" <"$PIPE" &; "$@" 2>&1 >"$PIPE"; status=$?; rm "$PIPE"; exit $status – Marko Teiste Jun 14 '09 at 08:37
  • You can [edit (change)](https://stackoverflow.com/posts/985931/edit) your answer. (But ***without*** "Edit:", "Update:", or similar - the answer should appear as if it was written today.) – Peter Mortensen Jan 31 '22 at 22:01
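For reference, the FIFO approach from the second comment, cleaned up into a runnable sketch (as quoted, the `&;` would be a shell syntax error; false stands in for the real command, and the FIFO name from mktemp is illustrative):

```shell
#!/bin/sh
# tee reads from a named pipe in the background; the command writes
# into the pipe, and its own exit status is preserved.
pipe=$(mktemp -u)
mkfifo "$pipe"
tee /dev/null < "$pipe" &
false > "$pipe" 2>&1
status=$?
wait             # let the background tee drain and exit
rm -f "$pipe"
echo "status: $status"   # 1
```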