84

I have a pipeline doing just

 command1 | command2

So, stdout of command1 goes to command2, while stderr of command1 goes to the terminal (or wherever stdout of the shell is).

How can I pipe stderr of command1 to a third process (command3) while stdout is still going to command2 ?

oHo
  • Have a look at [my answer to *Pipe output to two different commands*](https://stackoverflow.com/a/13108173/1765658) and [my *Intro about parallelisation*](https://stackoverflow.com/a/19125525/1765658)! – F. Hauri - Give Up GitHub Oct 24 '22 at 06:10

8 Answers

81

Use another file descriptor

{ command1 2>&3 | command2; } 3>&1 1>&2 | command3

You can use up to seven other file descriptors, from 3 to 9.
If you want more explanation, please ask and I can explain ;-)
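As an aside, the same spare-descriptor trick is behind the classic stream-swap idiom. A minimal sketch (the `echo` group stands in for a real command):

```shell
# Swap stdout and stderr using fd 3 as a temporary:
#   3>&1  save stdout in fd 3
#   1>&2  point stdout at stderr
#   2>&3  point stderr at the saved stdout
#   3>&-  close the temporary
{ echo out; echo err >&2; } 3>&1 1>&2 2>&3 3>&-
# "err" now arrives on stdout, "out" on stderr
```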

Test

{ { echo a; echo >&2 b; } 2>&3 | sed >&2 's/$/1/'; } 3>&1 1>&2 | sed 's/$/2/'

output:

b2
a1

Example

Produce two log files:
1. stderr only
2. stderr and stdout

{ { { command 2>&1 1>&3; } | tee err-only.log; } 3>&1; } > err-and-stdout.log

If command is echo "stdout"; echo "stderr" >&2 then we can test it like this:

$ { { { echo out >&3; echo err >&1; } | tee err-only.log; } 3>&1; } > err-and-stdout.log
$ head err-only.log err-and-stdout.log
==> err-only.log <==
err

==> err-and-stdout.log <==
out
err
oHo
  • How do you add a file descriptor? `echo out >&3` outputs "-bash: 3: Bad file descriptor" – Isaac Betesh Aug 26 '13 at 20:47
  • 2
    Found the answer here: http://unix.stackexchange.com/questions/18899/when-would-you-use-an-additional-file-descriptor – Isaac Betesh Aug 26 '13 at 20:55
  • 4
    antak's answer below is more complete. It still maintains the original separation between stdout and stderr, as the command does, without all the pipes. Note that with pipes, command runs in a subprocess; if you don't want that (e.g. because you want the command to modify global variables), you would need to create a fifo and use redirections instead. – jxy Feb 24 '17 at 04:32
  • Thanks, @oHo. BTW, is there a way to preserve the `command`'s exit code, esp. given that `tee` eats it up? I.e. a subsequent `rc=$?` saves `0` to `rc`. – Andrevinsky Oct 11 '21 at 14:55
54

The accepted answer results in the reversing of stdout and stderr. Here's a method that preserves them (since Googling on that purpose brings up this post):

{ command 2>&1 1>&3 3>&- | stderr_command; } 3>&1 1>&2 | stdout_command

Notice:

  • 3>&- is required to prevent fd 3 from being inherited by command, as this can lead to unexpected results depending on what command does with it.

Parts explained:

  1. Outer part first:

    1. 3>&1 -- fd 3 for { ... } is set to what fd 1 was (i.e. stdout)
    2. 1>&2 -- fd 1 for { ... } is set to what fd 2 was (i.e. stderr)
    3. | stdout_command -- fd 1 (was stdout) is piped through stdout_command
  2. Inner part inherits file descriptors from the outer part:

    1. 2>&1 -- fd 2 for command is set to what fd 1 was (i.e. stderr as per outer part)
    2. 1>&3 -- fd 1 for command is set to what fd 3 was (i.e. stdout as per outer part)
    3. 3>&- -- fd 3 for command is set to nothing (i.e. closed)
    4. | stderr_command -- fd 1 (was stderr) is piped through stderr_command

Example:

foo() {
    echo a
    echo b >&2
    echo c
    echo d >&2
}

{ foo 2>&1 1>&3 3>&- | sed -u 's/^/err: /'; } 3>&1 1>&2 | sed -u 's/^/out: /'

Output:

out: a
err: b
err: d
out: c

(Order of a -> c and b -> d will always be indeterminate because there's no form of synchronization between stderr_command and stdout_command.)

antak
  • This thing works, I verified it but I am not able to understand how it works. In the outer part, Point 3 stdout_command isn't fd1 now pointing to stderr, how is stdout going there instead of stderr. – Rahul Kadukar Mar 11 '16 at 21:32
  • In fact this also worked: `(command 2>&1 | stderr_command; ) 1>&2 | stdout_command` – Rahul Kadukar Mar 11 '16 at 22:01
  • @RahulKadukar That puts both `stdout` and `stderr` of `command` through `stderr_command` and nothing goes through `stdout_command`. – antak Jul 03 '19 at 23:53
  • 1
    I enjoyed unraveling this, thank you (: Note: you could make it a little shorter by having the innermost redirects be merely `2>&3 3>&-`. This does, however, mean you need to handle stdout on the inside of the curlies and stderr on the outside (so, swap `stdin_command` and `stdout_command` in your example). – jwd Jul 14 '19 at 04:50
  • @jwd Thanks for the comment. :) Problem with that approach is `stdout` and `stderr` of the entire command line comes out reversed. I tested it by adding `>/dev/null` on the end of the command line and seeing if only `a` and `c` were filtered out. – antak Jul 14 '19 at 09:10
  • Oh good point. And I guess there's no easy way to get around that since the outermost pipe will write to the shell's `stdout` regardless of redirects (unless we added more curlies). No golf points for me (: – jwd Jul 15 '19 at 18:35
  • This is great, thanks @antak! But I found that while it works great in bash, it doesn't work as-is in zsh due to multios. I posted a tweaked version for zsh as [another answer](https://stackoverflow.com/a/59638811/2562319). – jbyler Jan 08 '20 at 02:55
  • My quick’n’dirty go-to for stuff like this (particularly when I’m just typing — and thinking — left-to-right at an interactive shell) has usually been something along the lines of `{ cmd 3>&1 1>&2 2>&3 3>&- | tee err.log; } 3>&1 1>&2 2>&3 3>&- | tee out.log` (swap out/err, process err; swap again, process out…) but the succinctness of this is quite appealing, and I suspect it’s more efficient to boot, since it allows the subshell to inherit the temporary file descriptor (as subprocesses were designed to do) instead of taking pains to recreate it unnecessarily… I may have to start using this. :) – Mark G. Jan 20 '20 at 19:37
31

Using process substitution:

command1 > >(command2) 2> >(command3)

See http://tldp.org/LDP/abs/html/process-sub.html for more info.
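A concrete (bash-only) sketch, tagging each stream with `sed` (the `echo` group stands in for command1). Note the pitfall discussed in the comments below: redirections are wired up left to right, so give the stderr branch an explicit `>&2` (or list it first) to keep its output from falling into the already-redirected stdout:

```shell
#!/bin/bash
# Tag stdout and stderr separately. The explicit >&2 keeps the tagged
# stderr on stderr instead of letting the substitution inherit the
# redirected stdout.
{ echo out; echo err >&2; } \
  2> >(sed 's/^/err: /' >&2) \
  > >(sed 's/^/out: /')
```

Keep in mind the process substitutions run asynchronously, so their output may arrive slightly after the main command returns.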

FuePi
  • 9
    Note: this is not POSIX but a bashism. – josch Jan 17 '19 at 08:00
  • 5
    It's also so much nicer than the POSIX solutions. – goji Nov 04 '21 at 02:13
  • This does not work. Using the example `foo` function from https://stackoverflow.com/a/31151808, the output of `foo > >(sed -u 's/^/out: /') 2> >(sed -u 's/^/err: /')` is `out: a` `out: c` `out: err: b` `out: err: d` – Clement Cherlin May 18 '23 at 19:48
  • Interestingly, reversing the order *does* work. `foo 2> >(sed -u 's/^/err: /') > >(sed -u 's/^/out: /')` outputs `err: b` `err: d` `out: a` `out: c` – Clement Cherlin May 18 '23 at 19:52
  • Hi @ClementCherlin, the order of 1> and 2> shouldn't matter: `$ mkdir -v test > >(sed 's/^/OUT: /') 2> >(sed 's/^/ERR: /') OUT: mkdir: created directory ‘test’ $ mkdir -v test > >(sed 's/^/OUT: /') 2> >(sed 's/^/ERR: /') OUT: ERR: mkdir: cannot create directory ‘test’: File exists $ rmdir test $ mkdir -v test 2> >(sed 's/^/ERR: /') > >(sed 's/^/OUT: /') OUT: mkdir: created directory ‘test’ $ mkdir -v test 2> >(sed 's/^/ERR: /') > >(sed 's/^/OUT: /') ERR: mkdir: cannot create directory ‘test’: File exists` Are you sure about which output of `foo` is going to STDOUT/STDERR? – FuePi May 19 '23 at 09:06
  • Maybe an easier example: `$ (echo out >&1; echo err >&2) 1> >(sed 's/^/OUT: /') 2> >(sed 's/^/ERR: /') $ OUT: out OUT: ERR: err $ (echo out >&1; echo err >&2) 2> >(sed 's/^/ERR: /') 1> >(sed 's/^/OUT: /') $ ERR: err OUT: out` – FuePi May 19 '23 at 09:21
  • @FuePi do you see in your own output `OUT: out OUT: ERR: err`? That's the problem. `err` is being processed by the `ERR` script, and then the output of the `ERR` script is going to the `OUT` script. – Clement Cherlin May 24 '23 at 17:54
  • Hi @ClementCherlin, that is due to echo adding a newline. Try this: `(echo -n out >&1; echo err >&2) 1> >(sed 's/^/OUT: /') 2> >(sed 's/^/ERR: /')` The output is for me: `OUT: outERR: err` – FuePi Jun 06 '23 at 14:25
  • @FuePi Unfortunately, it's not related to echo, it's the order of redirections. `(printf out >&1; printf err >&2) 1> >(sed 's/^\(.\+\)$/FIRST<\1>/') 2> >(sed 's/^\(.\+\)$/SECOND[\1]/')` produces `FIRST`, but `(printf out >&1; printf err >&2) 2> >(sed 's/^\(.\+\)$/FIRST<\1>/') 1> >(sed 's/^\(.\+\)$/SECOND[\1]/')` produces `SECOND[out]FIRST` – Clement Cherlin Jun 15 '23 at 16:04
17

Simply redirect stderr to stdout

{ command1 | command2; } 2>&1 | command3

Caution: command3 will also read command2's stdout (if any).
To avoid that, you can discard command2's stdout:

{ command1 | command2 >/dev/null; } 2>&1 | command3

However, to keep command2's stdout (e.g. in the terminal),
please refer to my other, more complex answer above.

Test

{ { echo -e "a\nb\nc" >&2; echo "----"; } | sed 's/$/1/'; } 2>&1 | sed 's/$/2/'

output:

a2
b2
c2
----12
oHo
  • 1
    Whoops, good call. I intially thought the OP wanted stderr to go *only* to `command 3`. This looks like the right way to go. – FatalError Feb 02 '12 at 14:22
  • 1
    Wouldn't `{ command1 | command2 >/dev/null 2>&1 } 2>&1 | command3 ` prevent stdout/stderr of command2 to reach command3 , or would that also mess with stderr of command1 ? –  Feb 02 '12 at 15:02
  • Hi @user964970. The `/dev/null` redirection is a good idea. As you said, your example above messes up `stderr` and `stdout` because they are inverted in the same step. I would prefer `{ command1 | command2 >/dev/null; } 2>&1 | command3`. I have edited my answer to use your brilliant contribution. Thanks ;-) – oHo Feb 02 '12 at 15:38
  • One problem with this answer is that the { } creates a subshell, which in some situations is not acceptable. For instance, you cannot pass variables back out of the { }. – Kevin Keane May 17 '20 at 19:20
2

Pipe stdout as usual, but use Bash process substitution for the stderr redirection:

some_command 2> >(command of stderr) | command of stdout

Note: process substitution requires Bash, so the script's header must be #!/bin/bash
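A hypothetical sketch of the pattern (`make_noise` is a stand-in producer): filter warnings out of stderr while stdout flows down the pipe. The `>&2` inside the substitution keeps the filtered stderr on stderr; without it, the substitution would inherit the stdout pipe and its output would end up in the stdout command's input:

```shell
#!/bin/bash
# make_noise is an illustrative stand-in for some_command.
make_noise() {
    echo "data"
    echo "warning: harmless" >&2
    echo "error: real problem" >&2
}

# Drop warnings from stderr; uppercase whatever reaches stdout.
make_noise 2> >(grep -v '^warning' >&2) | tr 'a-z' 'A-Z'
```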

iBug
2

Zsh Version

I like the answer posted by @antak, but it doesn't work correctly in zsh due to multios. Here is a small tweak to use it in zsh:

{ unsetopt multios; command 2>&1 1>&3 3>&- | stderr_command; } 3>&1 1>&2 | stdout_command

To use, replace command with the command you want to run, and replace stderr_command and stdout_command with your desired pipelines. For example, the command ls / /foo will produce both stdout output and stderr output, so we can use it as a test case. To save the stdout to a file called stdout and the stderr to a file called stderr, you can do this:

{ unsetopt multios; ls / /foo 2>&1 1>&3 3>&- | cat >stderr; } 3>&1 1>&2 | cat >stdout

See @antak's original answer for full explanation.

jbyler
  • This worked in zsh for me, except the stderr and stdout commands were flipped, in both bash and zsh. It looks to me like you are redirecting stdout to 3, then routing 3 back to stdout, so the last statement should be the stdout one. I have no idea. – DeusXMachina Mar 10 '20 at 18:45
1

The same effect can be accomplished fairly easily with a fifo. I'm not aware of a direct piping syntax for doing it (though it would be nifty to see one). This is how you might do it with a fifo.

First, something that prints to both stdout and stderr, outerr.sh:

#!/bin/bash

echo "This goes to stdout"
echo "This goes to stderr" >&2

Then we can do something like this:

$ mkfifo err
$ wc -c err &
[1] 2546
$ ./outerr.sh 2>err | wc -c
20
20 err
[1]+  Done                    wc -c err

That way you set up the listener for stderr output first and it blocks until it has a writer, which happens in the next command, using the syntax 2>err. You can see that each wc -c got 20 characters of input.

Don't forget to clean up the fifo after you're done if you don't want it to hang around (i.e. rm). If the other command wants input on stdin and not a file arg, you can use input redirection like wc -c < err too.
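Putting the pieces together, a self-contained sketch with automatic cleanup (the `echo` group stands in for outerr.sh, and the fifo name comes from mktemp rather than being hard-coded):

```shell
#!/bin/bash
# Split stderr to its own consumer via a fifo, removing the fifo on exit.
fifo=$(mktemp -u) && mkfifo "$fifo"
trap 'rm -f "$fifo"' EXIT

sed 's/^/err: /' < "$fifo" &              # reader blocks until a writer appears
{ echo out; echo err >&2; } 2> "$fifo" | sed 's/^/out: /'
wait                                       # let the background reader drain
```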

FatalError
  • 2
    Looks like the OP wanted *both* `stdout` and `stderr` to go to `command2`, which I initially missed. The above separates the two and send each separately to a command. I'll leave it though, as it might be useful to somebody. – FatalError Feb 02 '12 at 14:23
  • No, I do not want both stdout and stderr to go to command2. stdout of command1 to command2, stderr of command1 to command3. command2 should not get stderr of command1 –  Feb 02 '12 at 14:57
0

It's been a long time but...

@oHo's answer has the disadvantage of redirecting command2's output to stderr, while @antak's answer may reverse the order of the outputs.

The solution below is likely to fix both problems: it correctly redirects command2's and command3's outputs and errors to, respectively, stdout and stderr, as expected, and preserves order.

{ { command1 2>&3 | command2; } 3>&1 1>&4 | command3; } 4>&1

Of course, it also satisfies the OP's need to redirect output and errors from command1 to, respectively, command2 and command3.
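A quick check in the style of the earlier tests (`echo a` on stdout, `echo b` on stderr, each stream tagged by its own `sed`):

```shell
# Both tagged streams land on the terminal's stdout, unreversed;
# the relative order of the two lines may vary.
{ { { echo a; echo b >&2; } 2>&3 | sed 's/$/1/'; } 3>&1 1>&4 | sed 's/$/2/'; } 4>&1
# prints a1 (stdout pipeline) and b2 (stderr pipeline)
```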

shrike