1241

I have a program that writes information to stdout and stderr, and I need to process the stderr with grep, leaving stdout aside.

Using a temporary file, one could do it in two steps:

command > /dev/null 2> temp.file
grep 'something' temp.file

But how can this be achieved without temp files, using one command and pipes?

tripleee
  • 2
    A similar question, but retaining stdout: http://unix.stackexchange.com/questions/3514/how-to-grep-standard-error-stream-stderr – joeytwiddle Apr 02 '14 at 18:48
  • This question was for Bash but it's worth mentioning this related [article](http://mywiki.wooledge.org/BashFAQ/047) for Bourne / Almquist shell. – Stephen Niedzielski Aug 07 '14 at 16:44
  • 5
    @Rolf What do you mean? Bash gets updates fairly regularly; the syntax you propose is not very good, because it conflicts with existing conventions, but you can actually use `|&` to pipe both stderr and stdout (which isn't what the OP is asking exactly, but pretty close to what I guess your proposal could mean). – tripleee Nov 10 '20 at 12:24
  • @tripleee I mean that the development of features or syntax seems to have ended or is happening at a very slow pace therefore we seem to be stuck with syntax that was determined decades ago. – Rolf Nov 11 '20 at 10:23
  • 1
    @Rolf the syntax you proposed is ambiguous in bash. But even if we ignore the ambiguity, is the backwards incompatibility and user frustration really worth it to save a few key strokes, for a 'feature" that replicates existing behavior? – Z4-tier Aug 01 '21 at 19:17
  • 1
    @Z4-tier how is it backwards incompatible and ambiguous? what does `2|` otherwise mean in bash? – Rolf Sep 07 '21 at 07:45
  • 1
    @Rolf These commands would have different behavior: `echo 2 | tee my_file` versus `echo 2|tee my_file`. – Z4-tier Sep 08 '21 at 02:50
  • 2
    @Z4-tier Thanks. `2 | ` is not `2|` indeed, I would not call it ambiguous, more like potentially error-inducing, just like `echo 2 > /myfile` and `echo 2> /myfile` which is even more of an issue. Anyway it's not about saving a few keystrokes, I find the other solutions convoluted and quirky and have yet to wrap my head around them which is why I would just fire up `rc` which has a straightforward syntax for determining the stream that you want to redirect. – Rolf Dec 10 '21 at 12:41

11 Answers

1519

First redirect stderr to stdout — the pipe; then redirect stdout to /dev/null (without changing where stderr is going):

command 2>&1 >/dev/null | grep 'something'

For the details of I/O redirection in all its variety, see the chapter on Redirections in the Bash reference manual.

Note that the sequence of I/O redirections is interpreted left-to-right, but pipes are set up before the I/O redirections are interpreted. File descriptors such as 1 and 2 are references to open file descriptions. The operation 2>&1 makes file descriptor 2 aka stderr refer to the same open file description as file descriptor 1 aka stdout is currently referring to (see dup2() and open()). The operation >/dev/null then changes file descriptor 1 so that it refers to an open file description for /dev/null, but that doesn't change the fact that file descriptor 2 refers to the open file description which file descriptor 1 was originally pointing to — namely, the pipe.
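To see the effect concretely, here is a small sketch (the `produce` function and its messages are invented stand-ins for the real command):

```shell
# A stand-in for "command": writes one line to stdout and one to stderr
produce() {
    echo "normal output"
    echo "error: something went wrong" >&2
}

# stderr goes to the pipe, stdout is discarded,
# so grep sees only the error line
produce 2>&1 >/dev/null | grep 'something'
```

Running this prints only `error: something went wrong`; the `normal output` line never reaches `grep`.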

jtepe
Jonathan Leffler
  • 56
    i just stumbled across /dev/stdout /dev/stderr /dev/stdin the other day, and I was curious if those are good ways of doing the same thing? I always thought 2>&1 was a bit obfuscated. So something like: `command 2> /dev/stdout 1> /dev/null | grep 'something'` – Mike Lyons Oct 31 '11 at 15:03
  • 18
    You could use `/dev/stdout` et al, or use `/dev/fd/N`. They will be marginally less efficient unless the shell treats them as special cases; the pure numeric notation doesn't involve accessing files by name, but using the devices does mean a file name lookup. Whether you could measure that is debatable. I like the succinctness of the numeric notation - but I've been using it for so long (more than a quarter century; ouch!) that I'm not qualified to judge its merits in the modern world. – Jonathan Leffler Oct 31 '11 at 15:35
  • haha, fair enough Jonathan. I didn't realize that there was an efficiency gain with 2>&1. Thanks for pointing that out – Mike Lyons Nov 06 '11 at 14:56
  • 24
    @Jonathan Leffler: I take a little issue with your plain text explanation *'Redirect stderr to stdout and **then** stdout to /dev/null'* -- Since one has to read redirection chains from right to left (not from left to right), we should also adapt our plain text explanation to this: *'Redirect stdout to /dev/null, and then stderr to where stdout used to be'*. – Kurt Pfeifle Jul 01 '12 at 11:46
  • 137
    @KurtPfeifle: au contraire! One must read the redirection chains from left to right since that is the way the shell processes them. The first operation is the `2>&1`, which means 'connect stderr to the file descriptor that stdout is _currently_ going to'. The second operation is 'change stdout so it goes to `/dev/null`', leaving stderr going to the original stdout, the pipe. The shell splits things at the pipe symbol first, so, the pipe redirection occurs before the `2>&1` or `>/dev/null` redirections, but that's all; the other operations are left-to-right. (Right-to-left wouldn't work.) – Jonathan Leffler Jul 01 '12 at 14:03
  • 1
    You need to parse each redirection operation from right to left. `2>&1` (or `2>& 1`) consists of an operator `2>&` and a file descriptor(fd) argument `1`. The shell than 'dupes' the target of the fd argument to the file descriptor embedded in the operator (`2` in this case). – Henk Langeveld Dec 09 '12 at 12:25
  • 15
    The thing that really surprises me about this is that it works on Windows, too (after renaming `/dev/null` to the Windows equivalent, `nul`). – Michael Burr Dec 10 '12 at 05:15
  • 3
    @KurtPfeifle: I used to have difficulties with understanding the required order of the redirections too. Until I wrote a little shell, and realized how a redirection is done with the `fd_redirect_into = open("file"); close(fd_to_redirect); dup(fd_redirect_into); close(fd_redirect_into);` system-call sequence. – SzG Aug 03 '13 at 05:52
  • 1
    @J.F.Sebastian could you please explain `|&` ? – Vassilis Apr 04 '15 at 14:18
  • 1
    @VassilisGr it is a bash syntax that merges stdout/stderr (like `2>&1`). It is used to capture stderr here: due to `|&:` command's stderr is redirected to stdout, then stdout is redirected to /dev/null, then `grep` receives only stderr from the command on its stdin. – jfs Apr 04 '15 at 14:31
  • 2
    @J.F.Sebastian thanks for your reply. It's a very interesting approach! Someone might check the bash versions though. For example in **GNU bash, version 3.2.53(1)-release-(x86_64-apple-darwin13)** it does raise *syntax error near unexpected token &*. With **GNU Bash 4.3** in linux.. no problem! – Vassilis Apr 04 '15 at 15:19
  • 1
    @J.F.Sebastian Reading from left-to-right (as Jonathan explains) wouldn't `>/dev/null |&` first redirect `stdout` to `/dev/null` and then `2>&1` redirects `stderr` also to `/dev/null`? – legends2k Aug 05 '15 at 07:22
  • 2
    @legends2k: Note that first a pipeline is split into a sequence of commands connected by pipes. `|&` is a special case of `|`; it redirects both standard output and standard error to the pipe. Then the plain redirections are processed left-to-right. So, `command > /dev/null |& grep 'something'` splits the pipeline at the `|&`. On the LHS, the standard output and standard error are redirected to the pipe; then the `>/dev/null` redirection sends standard output to `/dev/null`, so only standard error is going to the pipe. The `grep` reads from the pipe, looking for 'something'. – Jonathan Leffler Aug 05 '15 at 07:30
  • 4
    @legends2k: Overall, `command >/dev/null |& grep 'something'` is equivalent to `command 2>&1 >/dev/null | grep 'something'`. See also [pipelines](http://www.gnu.org/software/bash/manual/bash.html#Pipelines) in the Bash manual. In fact, that says: _If ‘`|&`’ is used, command1’s standard error, in addition to its standard output, is connected to command2’s standard input through the pipe; it is shorthand for `2>&1 |`. This implicit redirection of the standard error to the standard output is performed after any redirections specified by the command._ Ugh! Well, it is what it is; RTFM applies. – Jonathan Leffler Aug 05 '15 at 07:31
  • And I should add that the RTFM applies at least as much to me as anyone else; it wasn't until I read the fine print that I found the slightly odd behaviour of `|&`. – Jonathan Leffler Aug 05 '15 at 08:09
  • Yes. I quoted the manual. It supersedes my interpretation of it. I won't be using `|&`; it is unnecessary and does not conform to my idea of what is useful to me. Others may do as they wish. – Jonathan Leffler Aug 05 '15 at 08:38
  • 1
    @JonathanLeffler Found it! Both our interpretations that `|&` is a special case is based on Sebastian's comment (and partly due to the number of upvotes it garnered). However, that comment is incorrect. See [here](http://goo.gl/gq5puV). I've come to tell you that your interpretation, without the special case, and as the manual explains is actually correct. So `>/dev/null |&` first redirects `stdout` to `NUL` and `stderr` also gets pointed to `NUL` (left-to-right); this it's not what the OP wants. – legends2k Aug 05 '15 at 09:57
  • 1
    @legends2k: I've removed the incorrect comment. I thought that `|&` processing happens *before* the individiual redirections but as [the manual](http://www.gnu.org/software/bash/manual/bash.html#Pipelines) says it happens *after* and the actual behavior confirms that it happens after the redirections i.e., `command >/dev/null |& grep 'something'` is equivalent to `command >/dev/null 2>&1 | grep 'something'` (all standard output is to `/dev/null`. `grep` sees nothing) – jfs Aug 05 '15 at 11:50
  • @KurtPfeifle I'm amused by "we have to read from right to left" and "no, we must read from left to right". Both statements are wrong, of course, since there are valid interpretations that could reasonably be labelled each way. Personally my brain always wants `command $a>&$b $c>&d $e>&$f` to do `((command $a>&$b) $c>&d) $e>&$f` (i.e. "left-to-right" in a sense), but that's wrong. Instead, it does `((command $e>&$f) $c>&d) $a>&$b` (i.e. a "right-to-left" reading of the unparenthesized pipeline, in a sense). Others have given "left-to-right" explanations which are valid as well. – Don Hatch Jan 29 '16 at 13:56
  • @JonathanLeffler; Is `command &>1 >/dev/null | grep 'something'` equivalent to `command 2>&1 >/dev/null | grep 'something'`? – haccks Aug 02 '16 at 15:24
  • 3
    `&>1` is a Bash neologism; in my (archaic, cranky) view, it's horrid and I'd not touch it with a barge-pole. If my reading of the Bash manual on [`&>`](https://www.gnu.org/software/bash/manual/bash.html#Redirecting-Standard-Output-and-Standard-Error) is to be trusted, `&>1` means 'redirect standard output and standard error to a file called `1`' — and not to file descriptor number 1. I've not experimented with it; I have no plans to do so at the moment. I don't find it a useful addition to the shell syntax. – Jonathan Leffler Aug 02 '16 at 15:35
  • OK. Actually I have seen a test like `if kill -0 &>1 > /dev/null $pid` to check whether a process, with process id stored in variable `pid`, is running or not. This was the source of confusion. I guess it should be either `if kill -0 &> /dev/null $pid` or `if kill -0 $pid > /dev/null 2>&1`. – haccks Aug 03 '16 at 12:00
  • 1
    Conversely, if you want to see only standard output and not standard error you can do "command 1>&2 2> /dev/null". This is implied by the answer but wanted to spell it out for those simply seeking an answer for how to do this. – George Co May 16 '17 at 13:54
  • @GeorgeColpitts: Normally, if you want to lose standard error, you use just `command 2>/dev/null`. Using `command 1>&2 2>/dev/null` means that the standard output of the command goes to where the standard error was going (probably the terminal) — that's the `1>&2` part — and then standard error (but not the standard output) is sent to `/dev/null`. It isn't 100% wrong to use both, but it is unusual. Note, too, that the sequence matters. `command 2>/dev/null 1>&2` sends standard error to `/dev/null` and then sends standard output to the same place. – Jonathan Leffler May 16 '17 at 15:09
  • I have come back and read @JonathanLeffler's Jul 1 '12 comment every 3 to 6 months since he first made it. Today, for the first time, it… not only made sense, but felt… rhetorical. Like, "duh, of course." I think I'm finally done understanding it. I'll check back in a few months. – Bruno Bronosky May 19 '17 at 14:26
  • If this hangs and doesn't continue after execution try wrapping your command in `{}` curly brackets. Example `{command} 2>&1 >/dev/null | grep 'something'` – hfossli Jun 19 '18 at 09:58
  • Note that you can replace `/dev/null` with a filename to redirect stdout to a file and stderr to a process. – randomuser5215 Mar 29 '19 at 22:00
  • Note that this doesn't work (for some reason) in zsh: you need to wrap the whole thing in a subshell. See https://stackoverflow.com/questions/58019928/pipe-stderr-and-not-stdout-not-working-in-zsh – D0SBoots Sep 24 '19 at 02:32
  • **On syntax:** It might be useful to note that `>/dev/null` is equivalent to `1>/dev/null`. Furthermore, without the `&` in `2>&1`, it would write to a *file* named `1` rather than the *file descriptor* `1`. – Mateen Ulhaq May 13 '21 at 04:47
405

Or to swap the output from standard error and standard output over, use:

command 3>&1 1>&2 2>&3

This creates a new file descriptor (3) and points it at the same place as fd 1 (standard output), then points fd 1 (standard output) at the same place as fd 2 (standard error), and finally points fd 2 (standard error) at the same place as fd 3 (the saved copy of the original standard output).

Standard error is now available as standard output and the old standard output is preserved in standard error. This may be overkill, but it hopefully gives more details on Bash file descriptors (besides 0, 1, and 2, the single-digit descriptors 3 through 9 can be addressed in redirections).
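A minimal sketch of the swap (the `both` function and its messages are invented), with `3>&-` closing the helper descriptor afterwards:

```shell
# A stand-in command that writes to both streams
both() {
    echo "stdout line"
    echo "stderr line" >&2
}

# Swap the two streams, then close the helper descriptor.
# grep now filters what the command wrote to stderr;
# what the command wrote to stdout arrives on stderr instead.
both 3>&1 1>&2 2>&3 3>&- | grep 'stderr'
```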

Peter Mortensen
Kramish
  • 132
    A final tweak would be `3>&-` to close the spare descriptor that you created from stdout – Jonathan Leffler Jul 05 '12 at 23:59
  • 2
    Can we create a file descriptor that has `stderr` and another that has the combination of `stderr` and `stdout`? In other words can `stderr` go to two different files at once? – Stuart Feb 08 '14 at 01:34
  • The following still prints errors to stdout. What am I missing? ls -l not_a_file 3>&1 1>&2 2>&3 > errors.txt – user48956 Feb 27 '14 at 03:34
  • @user48956 - I'm adding `/etc/passwd` to your command so it'll have non-empty stdout, to make things clearer. If your mind is like mine, you're assuming your `ls -l /etc/passwd not_a_file 3>&1 1>&2 2>&3 > errors.txt` should give you the same as `(ls -l /etc/passwd not_a_file 3>&1 1>&2 2>&3) > errors.txt`, which is wrong. You can get the latter if desired by typing exactly that. On the other hand if your goal is simply to redirect 2 to a file, that's way easier: `ls -l /etc/passwd not_a_file 2> errors.txt` . – Don Hatch Jan 29 '16 at 12:21
  • @user48956 - To understand what your proposed command `ls -l /etc/passwd not_a_file 3>&1 1>&2 2>&3 > errors.txt` actually does, start by following Kramish's description; by the end of it, you've effectively swapped 1 and 2 which when run from command line isn't very interesting since they were originally both pointing at the terminal, so again they both point at the terminal. Your final `> errors.txt`, i.e. `1> errors.txt`, means the prog's output 1 (listing /etc/passwd) gets finally redirected to errors.txt, with its output 2 (complaining about not_a_file) still pointed at the terminal. – Don Hatch Jan 29 '16 at 12:22
  • 2
    @JonathanLeffler Out of curiosity, does your tweak serve any purpose performance-wise, other than perhaps clarifying the role of file descriptor (3) for an observer? – Jonas Dahlbæk Mar 07 '17 at 14:46
  • 2
    @JonasDahlbæk: the tweak is primarily an issue of tidiness. In truly arcane situations, it might make the difference between a process detecting and not detecting EOF, but that requires very peculiar circumstances. – Jonathan Leffler Mar 07 '17 at 14:54
  • I think the main difference with `3>&-` is that `write(3, "blarg")` in the command will immediately fail rather than eventually lock up the app. – Simon Buchan Aug 21 '17 at 07:06
  • 5
    **Caution**: this assumes FD 3 is not already in use, doesn't close it, and doesn't undo the swapping of file descriptors 1 and 2, so you can't go on to pipe this to yet another command. See [this answer](https://stackoverflow.com/a/52575213/5353461) for further detail and work-around. For a much cleaner syntax for {ba,z}sh, see [this answer](https://stackoverflow.com/a/52575087/5353461). – Tom Hale Sep 30 '18 at 06:56
  • @Tom What is the scope for closing a FD? So is `>&-` just closing the stdout for one command and therefore shorter than `> /dev/null`? – Cadoiz Dec 07 '21 at 13:47
  • Or is it doing real harm? @Jonathan How would you undo this? – Cadoiz Dec 07 '21 at 13:48
  • @Cadoiz — I don't understand your question. How would I undo what? – Jonathan Leffler Dec 07 '21 at 23:48
  • Closing an FD with `>&-`. Or is it only for the scope of one command? – Cadoiz Dec 08 '21 at 11:58
269

In Bash, you can also redirect to a subshell using process substitution:

command > >(stdout pipe)  2> >(stderr pipe)

For the case at hand:

command 2> >(grep 'something') >/dev/null
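
For example (messages invented; note that the process substitution runs asynchronously, so its output can arrive after the main command has returned):

```shell
# stdout is discarded; stderr is fed to grep through the process substitution
{ echo "normal output"; echo "error: disk full" >&2; } \
    2> >(grep 'disk') >/dev/null
```
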
Camille Goudeseune
Rich Johnson
  • 1
    Works very well for output to the screen. Do you have any idea why the ungrepped content appears again if I redirect the grep output into a file? After `command 2> >(grep 'something' > grep.log)` grep.log contains the same output as ungrepped.log from `command 2> ungrepped.log` – Tim Aug 20 '13 at 14:44
  • 13
    Use `2> >(stderr pipe >&2)`. Otherwise the output of the "stderr pipe" will go through the "stdlog pipe". – ceving Oct 28 '16 at 10:14
  • yeah!, `2> >(...)` works, i tried `2>&1 > >(...)` but it didn't – Dee Oct 04 '18 at 07:33
  • Here's a small example that may help me next time I look-up how to do this. Consider the following ... `awk -f /new_lines.awk out-content.txt 2> >(tee new_lines.log 1>&2 )` In this instance I wanted to _also_ see what was coming out as errors on my console. But STDOUT was going to the output file. So inside the sub-shell, you need to redirect that STDOUT back to STDERR inside the parentheses. While that works, the STDOUT output from the `tee` command winds-up at the end of the `out-content.txt` file. That seems inconsistient to me. – will Oct 25 '19 at 01:41
  • @datdinhquoc I did it somehow like `2>&1 1> >(dest pipe)` – Alireza Mohamadi Dec 13 '19 at 11:07
  • @Alireza from my understanding, you then get both the stderr and stdout in your pipe. To all: pay attention that you have to type `... >(...`, not `... > (...` (the space is wrong) – Cadoiz Dec 07 '21 at 13:54
232

Combining the best of these answers, if you do:

command 2> >(grep -v something 1>&2)

...then all stdout is preserved as stdout and all stderr is preserved as stderr, but you won't see any lines in stderr containing the string "something".

This has the unique advantage of not reversing or discarding stdout and stderr, nor smushing them together, nor using any temporary files.
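A quick way to check the behaviour (the messages are made up; this relies on Bash's process substitution):

```shell
# stdout passes through untouched; stderr lines containing "noise" are
# dropped, while the remaining stderr lines stay on stderr
{ echo "real output"
  echo "noise: please ignore" >&2
  echo "genuine error" >&2
} 2> >(grep -v 'noise' 1>&2)
```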

Pinko
  • Isn't `command 2> >(grep -v something)` (without `1>&2`) the same? – Francesc Rosas Oct 06 '13 at 12:53
  • 14
    No, without that, the filtered stderr ends up being routed to stdout. – Pinko Oct 10 '13 at 15:13
  • 1
    This is what I needed - tar outputs "file changed as we read it" for a directory always, so just want to filter out that one line but see if any other errors occur. So `tar cfz my.tar.gz mydirectory/ 2> >(grep -v 'changed as we read it' 1>&2)` should work. – razzed Mar 23 '16 at 20:10
  • 1
    this the only valid answer to the question. – Spongman Sep 30 '22 at 18:16
  • @Pinko are you sure your comment still stands? I'm doing some tests and it seems that is not the case, at least not anymore. – André Chalella Jun 14 '23 at 01:10
120

It's much easier to visualize things if you think about what's really going on with "redirects" and "pipes." Redirects and pipes in bash do one thing: modify where the process file descriptors 0, 1, and 2 point to (see /proc/[pid]/fd/*).

When a pipe or "|" operator is present on the command line, the first thing to happen is that bash creates a fifo and points the left side command's FD 1 to this fifo, and points the right side command's FD 0 to the same fifo.

Next, the redirect operators for each side are evaluated from left to right, and the current settings are used whenever duplication of a descriptor occurs. This is important: because the pipe was set up first, FD 1 (left side) and FD 0 (right side) are already changed from what they might normally have been, and any duplication of them will reflect that fact.

Therefore, when you type something like the following:

command 2>&1 >/dev/null | grep 'something'

Here is what happens, in order:

  1. a pipe (fifo) is created. "command FD1" is pointed to this pipe. "grep FD0" also is pointed to this pipe
  2. "command FD2" is pointed to where "command FD1" currently points (the pipe)
  3. "command FD1" is pointed to /dev/null

So, all output that "command" writes to its FD 2 (stderr) makes its way to the pipe and is read by "grep" on the other side. All output that "command" writes to its FD 1 (stdout) makes its way to /dev/null.

If instead, you run the following:

command >/dev/null 2>&1 | grep 'something'

Here's what happens:

  1. a pipe is created and "command FD 1" and "grep FD 0" are pointed to it
  2. "command FD 1" is pointed to /dev/null
  3. "command FD 2" is pointed to where FD 1 currently points (/dev/null)

So, all stdout and stderr from "command" go to /dev/null. Nothing goes to the pipe, and thus "grep" will close out without displaying anything on the screen.

Also note that redirects (file descriptors) can be read-only (<), write-only (>), or read-write (<>).

A final note: whether a program writes something to FD 1 or FD 2 is entirely up to the programmer. Good programming practice dictates that error messages should go to FD 2 and normal output to FD 1, but you will often find sloppy programming that mixes the two or otherwise ignores the convention.
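The contrast between the two orderings can be sketched directly (`noisy` is an invented stand-in):

```shell
noisy() {
    echo "to stdout"
    echo "to stderr" >&2
}

# 2>&1 first: stderr follows stdout into the pipe,
# then stdout alone is pointed at /dev/null
noisy 2>&1 >/dev/null | grep 'stderr'     # prints "to stderr"

# >/dev/null first: stdout goes to /dev/null,
# then stderr follows it there; grep reads nothing
noisy >/dev/null 2>&1 | grep 'stderr'     # prints nothing
```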

Michael Martinez
  • 7
    Really nice answer. My one suggestion would be to replace your first use of "fifo" with "fifo (a named pipe)". I've been using Linux for a while but somehow never managed to learn that is another term for named pipe. This would have saved me from looking it up, but then again I wouldn't have learned the other stuff I saw when I found that out! – Mark Edington Apr 26 '16 at 19:36
  • 5
    @MarkEdington Please note that FIFO is only another term for named pipe *in the context of pipes and IPC*. In a more general context, FIFO means First in, first out, which describes insertion and removal from a queue data structure. – Loomchild Jan 23 '17 at 08:25
  • 7
    @Loomchild Of course. The point of my comment was that even as a seasoned developer, I had never seen FIFO used as a *synonym* for named pipe. In other words, I didn't know this: https://en.wikipedia.org/wiki/FIFO_(computing_and_electronics)#Pipes - Clarifying that in the answer would have saved me time. – Mark Edington Jan 23 '17 at 14:18
53

If you are using Bash, then use:

command >/dev/null |& grep "something"

http://www.gnu.org/software/bash/manual/bashref.html#Pipelines

Peter Mortensen
Ken Sharp
  • 11
    Nope, `|&` is equal to `2>&1` which combines stdout and stderr. The question explicitly asked for output *without* stdout. – Profpatsch Dec 21 '14 at 13:42
  • 3
    „If ‘|&’ is used, the standard error of command1 is connected to command2’s standard input through the pipe; it is shorthand for 2>&1 |” Taken verbatim from the fourth paragraph at your link. – Profpatsch Dec 21 '14 at 22:07
  • 12
    @Profpatsch: Ken's answer is correct, look that he redirects stdout to null before combining stdout and stderr, so you'll get in pipe only the stderr, because stdout was previously droped to /dev/null. – Luciano Aug 20 '15 at 13:08
  • I use `mplayer a 2>/dev/null |& grep i`(got output) and `mplayer a >/dev/null |& grep i`(no output at all ! ) to test in which file name `a` doesn't exist and wonder why your answer doesn't works, may be due to bash version ? Then i just figure out i need to use fd 3, i.e. `mplayer a 3>/dev/null |& grep i` to get the output lol. – 林果皞 Mar 31 '16 at 11:23
  • 4
    But i still found your answer is wrong, `>/dev/null |&` expand to `>/dev/null 2>&1 |` and means stdout inode is empty to pipe because nobody(#1 #2 both tied to /dev/null inode) is tied to stdout inode (e.g. `ls -R /tmp/* >/dev/null 2>&1 | grep i` will give empty, but `ls -R /tmp/* 2>&1 >/dev/null | grep i` will lets #2 which tied to stdout inode will pipe). – 林果皞 Mar 31 '16 at 13:04
  • 5
    Ken Sharp, I tested, and `( echo out; echo err >&2 ) >/dev/null |& grep "."` gives no output (where we want "err"). `man bash` says *If |& is used … is shorthand for 2>&1 |. This implicit redirection of the standard error to the standard output is performed after any redirections specified by the command.* So first we redirect command's FD1 to null, then we redirect command's FD2 to where FD1 pointed, ie. null, so grep's FD0 gets no input. See http://stackoverflow.com/a/18342079/69663 for a more in-depth explanation. – unhammer Sep 22 '16 at 07:06
12

For those who want to redirect stdout and stderr permanently to files, grep the stderr stream, and still be able to write messages to the tty:

# save tty-stdout to fd 3
exec 3>&1
# switch stdout and stderr, grep (-v) stderr for nasty messages and append to files
exec 2> >(grep -v "nasty_msg" >> std.err) >> std.out
# goes to the std.out
echo "my first message" >&1
# goes to the std.err
echo "a error message" >&2
# goes nowhere
echo "this nasty_msg won't appear anywhere" >&2
# goes to the tty
echo "a message on the terminal" >&3
JBD
10

This will redirect command1 stderr to command2 stdin, while leaving command1 stdout as is.

exec 3>&1
command1 2>&1 >&3 3>&- | command2 3>&-
exec 3>&-

Taken from LDP
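Filled in with a concrete (invented) producer, and with `grep` playing the role of command2, the pattern looks like this:

```shell
# Keep a copy of the current stdout in fd 3
exec 3>&1

# stderr -> the pipe, stdout -> the saved stdout (fd 3);
# close fd 3 in both children so they do not hold it open
{ echo "kept on stdout"; echo "error: oops" >&2; } \
    2>&1 >&3 3>&- | grep 'error' 3>&-

# Close the helper descriptor again
exec 3>&-
```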

theDolphin
  • So if I'm understanding this correctly, we start by duplicating the stdout of the current process (`3>&1`). Next redirect `command1`'s error to its output (`2>&1`), then _point_ stdout of `command1` to the parent process's copy of stdout (`>&3`). Clean up the duplicated file descriptor in the `command1` (`3>&-`). Over in `command2`, we just need to also delete the duplicated file descriptor (`3>&-`). These duplicates are caused when the parent forked itself to create both processes, so we just clean them up. Finally in the end, we delete the parent process's file descriptor (`3>&-`). – smac89 Apr 16 '21 at 04:28
  • In the end, we have `command1`'s original stdout pointer, now pointing to the parent process's stdout, while its stderr is pointing to where its stdout used to be, making it the new stdout for `command2`. – smac89 Apr 16 '21 at 04:37
4

I just came up with a solution for sending stdout to one command and stderr to another, using named pipes.

Here goes.

mkfifo stdout-target
mkfifo stderr-target
cat < stdout-target | command-for-stdout &
cat < stderr-target | command-for-stderr &
main-command 1>stdout-target 2>stderr-target

It's probably a good idea to remove the named pipes afterward.
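
A self-contained variant of the same idea, using a throwaway directory for the FIFOs and cleaning up afterwards (all names here are invented):

```shell
# Create the two named pipes in a temporary directory
dir=$(mktemp -d)
mkfifo "$dir/out" "$dir/err"

# Start one consumer per stream in the background
cat < "$dir/out" > /dev/null &       # stdout consumer (discards)
grep 'something' < "$dir/err" &      # stderr consumer

# Run the producer, one stream into each FIFO
{ echo "plain output"; echo "something bad" >&2; } \
    > "$dir/out" 2> "$dir/err"

# Wait for the consumers, then remove the pipes
wait
rm -r "$dir"
```
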

Tripp Kinetics
1

You can use the rc shell.

First install the package (it's less than 1 MB).

This is an example of how you would discard standard output and pipe standard error to grep in rc:

find /proc/ >[1] /dev/null |[2] grep task

You can do it without leaving Bash:

rc -c 'find /proc/ >[1] /dev/null |[2] grep task'

As you may have noticed, you can specify which file descriptor you want piped by using brackets after the pipe.

Standard file descriptors are numbered as follows:

  • 0 : Standard input
  • 1 : Standard output
  • 2 : Standard error
Peter Mortensen
Rolf
  • 1
    Suggesting installing an entirely different shell seems kindof drastic to me. – xdhmoore Feb 15 '21 at 18:06
  • 1
    @xdhmoore What's so drastic about it? It does not replace the default shell and the software only takes up a few K of space. The `rc` syntax for piping stderr is way better than what you would have to do in `bash` so I think it is worth a mention. – Rolf Feb 16 '21 at 15:13
-3

I tried the following and found it works as well:

command > /dev/null 2>&1 | grep 'something'
lasteye