I want to redirect both standard output and standard error of a process to a single file. How do I do that in Bash?

-
I'd like to say this is a surprisingly useful question. Many people do not know how to do this, as they don't have to do so frequently, and it is not the best documented behavior of Bash. – Robert Wm Ruedisueli Sep 03 '17 at 05:21
-
Sometimes it's useful to see the output (as usual) AND to redirect it to a file. See the answer by Marko below. (I say this here because it's easy to just look at the first accepted answer if that's sufficient to solve a problem, but other answers often provide useful information.) – jvriesem May 08 '18 at 19:00
15 Answers
Take a look here. It should be:
yourcommand &> filename
It redirects both standard output and standard error to file filename.
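For instance, a minimal sketch (the brace group is a stand-in for `yourcommand`, writing one line to each stream):

```shell
#!/usr/bin/env bash
log="$(mktemp)"

# Stand-in for "yourcommand": writes to both stdout and stderr.
{ echo "to stdout"; echo "to stderr" >&2; } &> "$log"

cat "$log"    # both lines are now in the file
```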

-
This syntax is deprecated according to the [Bash Hackers Wiki](http://wiki.bash-hackers.org/syntax/redirection). Is it? – Samuel Katz Jul 11 '12 at 01:10
-
I guess we should not use &> as it is not in POSIX, and common shells such as "dash" do not support it. – Sam Watkins Apr 23 '13 at 08:24
-
An extra hint: If you use this in a script, make sure it starts with `#!/bin/bash` rather than `#!/bin/sh`, since it requires bash. – Tor Klingberg Oct 01 '13 at 17:47
-
It is the simplest way, but unfortunately you then need to redirect the outputs of every command inside the script, or create a subprocess using parentheses. A better way, described by quizac, uses exec calls to reopen the script's outputs. – Znik Dec 08 '14 at 09:47
-
On my machine, this puts `yourcommand` in the background and redirects stdout to `filename` but not stderr. – Big McLargeHuge May 15 '16 at 16:45
-
@AlexanderGonchiy What shell are you using? This only works to redirect to filepath in bash. – rounce Dec 24 '18 at 14:37
-
domain wiki.bash-hackers.org is parked. The site contains ads only. – Brian Fitzgerald Jul 26 '23 at 17:00
do_something 2>&1 | tee -a some_file
This is going to redirect standard error to standard output, write standard output to some_file, and also print it to standard output.
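A quick way to check that behaviour (a sketch; `do_something` is simulated with a brace group writing to both streams):

```shell
#!/usr/bin/env bash
some_file="$(mktemp)"

# Simulate do_something: one line on stdout, one on stderr.
# 2>&1 merges stderr into stdout; tee -a appends the merged
# stream to some_file while also printing it to the terminal.
{ echo "out"; echo "err" >&2; } 2>&1 | tee -a "$some_file"

grep -c "" "$some_file"    # prints 2: both lines reached the file
```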

-
On AIX (ksh) your solution works. The accepted answer `do_something &>filename` doesn't. +1. – Withheld Jan 04 '13 at 16:01
-
I have a ruby script (which I don't want to modify in any way) that prints error messages in bold red. This ruby script is then invoked from my bash script (which I can modify). When I use the above, it prints the error messages in plain text, minus the formatting. Is there any way to retain on-screen formatting and get the output (both stdout and stderr) in a file as well? – atlantis Oct 30 '14 at 02:09
-
Note that (by default) this has the side-effect that `$?` no longer refers to the exit status of `do_something`, but the exit status of `tee`. – Flimm Jan 20 '15 at 14:09
-
Those few words "stderr to stdout and stdout to some_file" helped me understand something I was struggling to understand since long – Ali Jan 19 '17 at 05:41
-
Why the pipe to `tee`? Why is this better than using normal file redirection? – rounce Dec 24 '18 at 14:34
-
@AlexandreHoldenDaly Depends on your shell. I have run into that problem on tcsh. On bash it works without a problem. – Praveen Lobo Sep 05 '19 at 21:16
-
If you want to overwrite the log file every time, remove the `-a` flag. This flag appends the output to the log file if it exists. – byxor Jan 17 '20 at 10:55
-
I need this tattooed on my body somewhere, can never remember the order of the chars! – rob Mar 08 '22 at 14:15
-
FYI I just noticed that `2>&1` and `2>& 1` work, but `2 >&1` does NOT work. There must be no space between 2 and >&. – midnite Jun 11 '23 at 20:39
You can redirect stderr to stdout and the stdout into a file:
some_command >file.log 2>&1
See Chapter 20. I/O Redirection
This format is preferred over the more popular &> format, which only works in Bash. In the Bourne shell it could be interpreted as running the command in the background. This format is also more readable: 2 (standard error) is redirected to 1 (standard output).
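A sketch of the portable form (should also run under plain sh/dash, not just bash):

```shell
#!/bin/sh
# POSIX-compatible: redirect stdout to the file first,
# then point stderr at wherever stdout now goes.
log_file="$(mktemp)"

{ echo "out"; echo "err" >&2; } >"$log_file" 2>&1

grep -c "" "$log_file"    # prints 2: both lines are in the log
```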

-
What is the advantage of this approach over some_command &> file.log? – ubermonkey May 27 '09 at 14:04
-
If you want to append to a file then you must do it this way: echo "foo" 2>&1 1>> bar.txt AFAIK there's no way to append using &> – SlappyTheFish Jun 08 '10 at 10:58
-
Why the current approved answer is preferred before this answer is beyond me, why would someone want to use 'tee' if it's not necessary. – hbogert Jan 14 '15 at 09:52
-
I think the interpretation that 2>&1 redirects stderr to stdout is wrong; I believe it is more accurate to say it sends stderr to the same place that stdout is going at this moment in time. Thus placing 2>&1 *after* the first redirect is essential. – jdg Aug 07 '15 at 17:47
-
@SlappyTheFish, actually, there *is* a way: "&>>" From bash man: "The format for appending standard output and standard error is: &>>word This is semantically equivalent to >>word 2>&1" – Alexander Gonchiy Jan 08 '18 at 11:41
-
re: `what is the advantage ...` -- if you happen to be forced to use a bash enough older than bash 4, then `&>` and `>&` might not even be available. This syntax has been available since forever-ish. – Jesse Chisholm Apr 27 '18 at 18:01
-
If 2>&1 must follow >file then what about merging both streams for piping to "less". Is this `2>&1 | less` correct? – JohnMudd Feb 27 '20 at 13:05
-
This works, but when I start the script, I want to still be able to type commands on the command line, and I can't here – Daniel C Jacobs May 12 '20 at 16:22
-
+1. Prefer this approach to the now deprecated `&>`, according to: Obsolete and deprecated syntax [Bash Hackers Wiki (DEV 20200708T2203)] : https://wiki-dev.bash-hackers.org/scripting/obsolete . – Timur Shtatland Sep 11 '20 at 15:40
# Close standard output file descriptor
exec 1<&-
# Close standard error file descriptor
exec 2<&-
# Open standard output as $LOG_FILE file for read and write.
exec 1<>$LOG_FILE
# Redirect standard error to standard output
exec 2>&1
echo "This line will appear in $LOG_FILE, not 'on screen'"
Now, a simple echo will write to $LOG_FILE, and it is useful for daemonizing.
To the author of the original post,
It depends what you need to achieve. If you just need to redirect in/out of a command you call from your script, the answers are already given. Mine is about redirecting within current script which affects all commands/built-ins (includes forks) after the mentioned code snippet.
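A stripped-down sketch of that in-script approach (FD 3 is used here only to keep a handle on the original terminal; the append form is used instead of 1<> to avoid overwriting):

```shell
#!/usr/bin/env bash
LOG_FILE="$(mktemp)"    # hypothetical log path

exec 3>&1               # save the original stdout on FD 3
exec 1>>"$LOG_FILE"     # every later command's stdout goes to the log
exec 2>&1               # ...and so does stderr

echo "this is logged"
echo "this still reaches the terminal" >&3
```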
Another cool solution redirects to both standard error and standard output while logging to a log file at once; it involves splitting "a stream" into two. This functionality is provided by the 'tee' command, which can write/append to several file descriptors (files, sockets, pipes, etc.) at once: tee FILE1 FILE2 ... >(cmd1) >(cmd2) ...
exec 3>&1 4>&2 1> >(tee >(logger -i -t 'my_script_tag') >&3) 2> >(tee >(logger -i -t 'my_script_tag') >&4)
trap 'cleanup' INT QUIT TERM EXIT
get_pids_of_ppid() {
    local ppid="$1"
    RETVAL=''
    local pids=`ps x -o pid,ppid | awk "\\$2 == \\"$ppid\\" { print \\$1 }"`
    RETVAL="$pids"
}

# Needed to kill processes running in background
cleanup() {
    local current_pid element
    local pids=( "$$" )
    running_pids=("${pids[@]}")
    while :; do
        current_pid="${running_pids[0]}"
        [ -z "$current_pid" ] && break
        running_pids=("${running_pids[@]:1}")
        get_pids_of_ppid $current_pid
        local new_pids="$RETVAL"
        [ -z "$new_pids" ] && continue
        for element in $new_pids; do
            running_pids+=("$element")
            pids=("$element" "${pids[@]}")
        done
    done
    kill ${pids[@]} 2>/dev/null
}
So, from the beginning. Let's assume we have a terminal connected to /dev/stdout (file descriptor #1) and /dev/stderr (file descriptor #2). In practice, it could be a pipe, socket or whatever.
- Create file descriptors (FDs) #3 and #4 and point to the same "location" as #1 and #2 respectively. Changing file descriptor #1 doesn't affect file descriptor #3 from now on. Now, file descriptors #3 and #4 point to standard output and standard error respectively. These will be used as real terminal standard output and standard error.
- 1> >(...) redirects standard output to command in parentheses
- Parentheses (sub-shell) executes 'tee', reading from exec's standard output (pipe) and redirects to the 'logger' command via another pipe to the sub-shell in parentheses. At the same time it copies the same input to file descriptor #3 (the terminal)
- the second part, very similar, is about doing the same trick for standard error and file descriptors #2 and #4.
The result of running a script having the above line and additionally this one:
echo "Will end up in standard output (terminal) and /var/log/messages"
...is as follows:
$ ./my_script
Will end up in standard output (terminal) and /var/log/messages
$ tail -n1 /var/log/messages
Sep 23 15:54:03 wks056 my_script_tag[11644]: Will end up in standard output (terminal) and /var/log/messages
If you want to see clearer picture, add these two lines to the script:
ls -l /proc/self/fd/
ps xf

-
Only one exception: in the first example you wrote exec 1<>$LOG_FILE, which causes the original logfile to always be overwritten. For real logging, a better way is exec 1>>$LOG_FILE, which causes the log to always be appended. – Znik Dec 08 '14 at 09:43
-
That's true although it depends on intentions. My approach is to always create a unique and timestamped log file. The other is to append. Both ways are 'logrotateable'. I prefer separate files which require less parsing but as I said, whatever makes your boat floating :) – quizac Dec 08 '14 at 11:02
-
You'd have to save a copy of STDOUT descriptor to let's say #11 by running 'exec 11>&1' before closing FD#1(first line) and when you're finished with file logging, you can redirect it back to STDOUT by adding 'exec 1>&11'. In theory, it should work although, I haven't tested it. – quizac Mar 08 '16 at 11:19
-
Your second solution is informative, but what's with all the cleanup code? It doesn't seem relevant, and if so, only muddles an otherwise good example. I'd also like to see it reworked slightly so that FDs 1 and 2 aren't redirected to the logger but rather 3 and 4 are so that anything calling this script might manipulate 1 and 2 further under the common assumption the stdout==1 and stderr==2, but my brief experimentation suggests that's more complex. – JFlo Jun 23 '17 at 13:18
-
I like it better with the cleanup code. It might be a bit of a distraction from the core example, but stripping it would make the example incomplete. The net is already full of examples without error handling, or at least a friendly note that it still needs about a hundred lines of code to make it safe to use. – Zoltan K. Jul 09 '17 at 11:23
-
The cleanup code there is necessary and it removes processes in the background if the parent PID is killed or exits for any reason. 'logger' processes might get detached and continue to run while the parent process is already gone. This example is a part of a bigger script and I always include this snippet (among others) due to infrequent issues here and there. – quizac Jul 09 '17 at 13:40
-
I wanted to elaborate on the clean-up code. It's part of a script which daemonizes, ergo becomes immune to the hang-up signal. 'tee' and 'logger' are processes spawned by the same PPID and they inherit the HUP trap from the main bash script. So, once the main process dies they become inherited by init[1]. They will not become zombies (defunct). The clean-up code makes sure that all background tasks are killed if the main script dies. It also applies to any other process which might have been created and running in background. – quizac May 31 '18 at 14:24
-
I wish I could upvote this a hundred times! THAT'S a MINIMUM estimate of how many times I wanted to use a tool like tee and, well, had to invent my own way because I didn't know it existed, and neither did anyone else around me! THANKS HEAPS! – Richard T Mar 13 '23 at 03:05
bash your_script.sh 1>file.log 2>&1
1>file.log instructs the shell to send standard output to the file file.log, and 2>&1 tells it to redirect standard error (file descriptor 2) to standard output (file descriptor 1).
Note: The order matters, as liw.fi pointed out; 2>&1 1>file.log doesn't work.
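The difference is easy to verify (a sketch; the brace group writes one line to each stream):

```shell
#!/usr/bin/env bash
log="$(mktemp)"

# Wrong order: 2>&1 copies stdout's *current* target (the terminal),
# so only "out" lands in the file.
{ echo out; echo err >&2; } 2>&1 1>"$log"
grep -c "" "$log"    # prints 1

# Right order: stdout goes to the file first, then stderr follows it.
{ echo out; echo err >&2; } 1>"$log" 2>&1
grep -c "" "$log"    # prints 2
```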

-
To me, the second way makes more sense. First send all of stderr to stdout, then send stdout to the file. Why would we want to send stderr to stdout after stdout goes to the file? – Alaska Apr 14 '23 at 13:35
Curiously, this works:
yourcommand &> filename
But this gives a syntax error:
yourcommand &>> filename
syntax error near unexpected token `>'
You have to use:
yourcommand 1>> filename 2>&1
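On a Bash too old for &>>, the long form appends both streams just as well (a sketch, using brace groups in place of `yourcommand`):

```shell
#!/usr/bin/env bash
filename="$(mktemp)"

# Portable append: works on old bash and POSIX sh alike.
{ echo one; echo two >&2; } 1>> "$filename" 2>&1
{ echo three; echo four >&2; } 1>> "$filename" 2>&1

grep -c "" "$filename"    # prints 4: all four lines were appended
```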
-
`&>>` seems to work on BASH 4: `$ echo $BASH_VERSION 4.1.5(1)-release $ (echo to stdout; echo to stderr > /dev/stderr) &>> /dev/null` – user272735 May 26 '11 at 04:39
Short answer: Command >filename 2>&1
or Command &>filename
Explanation:
Consider the following code which prints the word "stdout" to stdout and the word "stderror" to stderror.
$ (echo "stdout"; echo "stderror" >&2)
stdout
stderror
Note that the '&' operator tells bash that 2 is a file descriptor (which points to stderr) and not a file name. If we left out the '&', this command would print stdout to stdout, and create a file named "2" and write stderror there.
By experimenting with the code above, you can see for yourself exactly how redirection operators work. For instance, by changing which of the two descriptors, 1 or 2, is redirected to /dev/null, the following two lines of code delete everything from stdout, and everything from stderror, respectively (printing what remains).
$ (echo "stdout"; echo "stderror" >&2) 1>/dev/null
stderror
$ (echo "stdout"; echo "stderror" >&2) 2>/dev/null
stdout
Now we can explain why the following code produces no output:
(echo "stdout"; echo "stderror" >&2) >/dev/null 2>&1
To truly understand this, I highly recommend you read this webpage on file descriptor tables. Assuming you have done that reading, we can proceed. Note that Bash processes redirections left to right; thus Bash sees >/dev/null first (which is the same as 1>/dev/null), and sets file descriptor 1 to point to /dev/null instead of stdout. Having done this, Bash then moves rightwards and sees 2>&1. This sets file descriptor 2 to point to the same file as file descriptor 1 (and not to file descriptor 1 itself! See this resource on pointers for more info). Since file descriptor 1 points to /dev/null, and file descriptor 2 points to the same file as file descriptor 1, file descriptor 2 now also points to /dev/null. Thus both file descriptors point to /dev/null, and this is why no output is rendered.
To test if you really understand the concept, try to guess the output when we switch the redirection order:
(echo "stdout"; echo "stderror" >&2) 2>&1 >/dev/null
stderror
The reasoning here is that, evaluating from left to right, Bash sees 2>&1 and thus sets file descriptor 2 to point to the same place as file descriptor 1, i.e. stdout. It then sets file descriptor 1 (remember that >/dev/null = 1>/dev/null) to point to /dev/null, thus deleting everything which would usually be sent to standard out. Thus all we are left with is that which was not sent to stdout in the subshell (the code in the parentheses), i.e. "stderror".
The interesting thing to note here is that even though 1 is just a pointer to stdout, redirecting pointer 2 to 1 via 2>&1 does NOT form a chain of pointers 2 -> 1 -> stdout. If it did, as a result of redirecting 1 to /dev/null, the code 2>&1 >/dev/null would give the pointer chain 2 -> 1 -> /dev/null, and thus the code would generate nothing, in contrast to what we saw above.
Finally, I'd note that there is a simpler way to do this:
From section 3.6.4 here, we see that we can use the operator &> to redirect both stdout and stderr. Thus, to redirect both the stderr and stdout output of any command to /dev/null (which deletes the output), we simply type
$ command &> /dev/null
or in case of my example:
$ (echo "stdout"; echo "stderror" >&2) &>/dev/null
Key takeaways:
- File descriptors behave like pointers (although file descriptors are not the same as file pointers)
- Redirecting a file descriptor "a" to a file descriptor "b" which points to file "f", causes file descriptor "a" to point to the same place as file descriptor b - file "f". It DOES NOT form a chain of pointers a -> b -> f
- Because of the above, order matters: 2>&1 >/dev/null is not the same as >/dev/null 2>&1. One generates output and the other does not!
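One practical consequence of the "no pointer chain" rule: placing 2>&1 *before* >/dev/null captures only stderr, for example into a variable (a sketch):

```shell
#!/usr/bin/env bash
# 2>&1 points FD 2 at stdout's current target (here, the command
# substitution's pipe); >/dev/null then moves only FD 1 away.
only_err="$( (echo "stdout"; echo "stderror" >&2) 2>&1 >/dev/null )"
echo "$only_err"    # prints: stderror
```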
Finally have a look at these great resources:
Bash Documentation on Redirection, An Explanation of File Descriptor Tables, Introduction to Pointers

-
File descriptors (0, 1, 2) are just offsets into a table. When 2>&1 is used the effect is slot FD[2] = dup(1) so wherever FD[1] was pointing FD[2] now points to. When you change FD[1] to point to /dev/null, then FD[1] is changed but it doesn't change the FD[2] slot (which points to stdout). I use the term dup() because that is the system call that is used to duplicate the file descriptor. – PatS Mar 03 '18 at 04:46
LOG_FACILITY="local7.notice"
LOG_TOPIC="my-prog-name"
LOG_TOPIC_OUT="$LOG_TOPIC-out[$$]"
LOG_TOPIC_ERR="$LOG_TOPIC-err[$$]"
exec 3>&1 > >(tee -a /dev/fd/3 | logger -p "$LOG_FACILITY" -t "$LOG_TOPIC_OUT" )
exec 2> >(logger -p "$LOG_FACILITY" -t "$LOG_TOPIC_ERR" )
It is related: Writing standard output and standard error to syslog.
It almost works, but not from xinetd ;(

-
I'm guessing it doesn't work because of "/dev/fd/3 Permission denied". Changing to >&3 may help. – quizac Sep 23 '14 at 17:40
I wanted a solution to have the output from stdout plus stderr written into a log file and stderr still on console. So I needed to duplicate the stderr output via tee.
This is the solution I found:
command 3>&1 1>&2 2>&3 1>>logfile | tee -a logfile
- First swap stderr and stdout
- then append the stdout to the log file
- pipe stderr to tee and append it also to the log file
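A self-contained sketch of those three steps (the brace group stands in for `command`; tee's own stdout is what still shows stderr on the console):

```shell
#!/usr/bin/env bash
logfile="$(mktemp)"

# Swap stdout and stderr via FD 3, append (the swapped-in) stdout
# to the log, then pipe the original stderr through tee into the
# log as well, so it appears both on screen and in the file.
{ echo out; echo err >&2; } 3>&1 1>&2 2>&3 1>>"$logfile" | tee -a "$logfile"

grep -c "" "$logfile"    # both lines end up in the log
```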

-
BTW, this didn't work for me (logfile is empty). |tee has no effect. Instead I got it working using https://stackoverflow.com/questions/692000/how-do-i-write-stderr-to-a-file-while-using-tee-with-a-pipe – Yaroslav Bulatov Sep 23 '18 at 00:50
For the situation when "piping" is necessary, you can use |&.
For example:
echo -ne "15\n100\n" | sort -c |& tee sort_result.txt
or
TIMEFORMAT=%R;for i in `seq 1 20` ; do time kubectl get pods | grep node >>js.log ; done |& sort -h
In these Bash-specific examples, |& pipes both standard output and standard error of the preceding command into the next one: the standard error of "sort -c" in the first case, and the combined output of the loop (including time's standard error) into "sort -h" in the second.
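In Bash 4+, cmd1 |& cmd2 is shorthand for cmd1 2>&1 | cmd2, so both streams feed the pipe. A minimal sketch:

```shell
#!/usr/bin/env bash
# The stdout line and the stderr line both reach sort through |&.
{ echo "banana"; echo "apple" >&2; } |& sort
# apple
# banana
```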

-
This is actually very important, and less known. Good call. You might also want explain what the `&` does, when used in combination with the pipe. – not2qubit Apr 30 '21 at 20:09
Adding to what Fernando Fabreti did, I changed the functions slightly, removed the &- closing, and it worked for me.
function saveStandardOutputs {
    if [ "$OUTPUTS_REDIRECTED" == "false" ]; then
        exec 3>&1
        exec 4>&2
        trap restoreStandardOutputs EXIT
    else
        echo "[ERROR]: ${FUNCNAME[0]}: Cannot save standard outputs because they have been redirected before"
        exit 1
    fi
}

# Parameters: $1 => logfile to write to
function redirectOutputsToLogfile {
    if [ "$OUTPUTS_REDIRECTED" == "false" ]; then
        LOGFILE=$1
        if [ -z "$LOGFILE" ]; then
            echo "[ERROR]: ${FUNCNAME[0]}: logfile empty [$LOGFILE]"
        fi
        if [ ! -f $LOGFILE ]; then
            touch $LOGFILE
        fi
        if [ ! -f $LOGFILE ]; then
            echo "[ERROR]: ${FUNCNAME[0]}: creating logfile [$LOGFILE]"
            exit 1
        fi
        saveStandardOutputs
        exec 1>>${LOGFILE}
        exec 2>&1
        OUTPUTS_REDIRECTED="true"
    else
        echo "[ERROR]: ${FUNCNAME[0]}: Cannot redirect standard outputs because they have been redirected before"
        exit 1
    fi
}

function restoreStandardOutputs {
    if [ "$OUTPUTS_REDIRECTED" == "true" ]; then
        exec 1>&3  # restore stdout
        exec 2>&4  # restore stderr
        OUTPUTS_REDIRECTED="false"
    fi
}
LOGFILE_NAME="tmp/one.log"
OUTPUTS_REDIRECTED="false"
echo "this goes to standard output"
redirectOutputsToLogfile $LOGFILE_NAME
echo "this goes to logfile"
echo "${LOGFILE_NAME}"
restoreStandardOutputs
echo "After restore this goes to standard output"

In situations where you consider using things like exec 2>&1, I find it easier to read, if possible, to rewrite the code using Bash functions like this:
function myfunc(){
[...]
}
myfunc &>mylog.log
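For example, a minimal sketch (mylog.log is replaced here with a temporary file; the function body is illustrative):

```shell
#!/usr/bin/env bash
mylog="$(mktemp)"    # stand-in for mylog.log

function myfunc(){
    echo "normal output"
    echo "an error" >&2
}

# One redirection covers everything the function does.
myfunc &> "$mylog"

grep -c "" "$mylog"    # prints 2: both lines were captured
```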

The following functions can be used to automate the process of toggling outputs between stdout/stderr and a logfile.
#!/bin/bash
#set -x

# global vars
OUTPUTS_REDIRECTED="false"
LOGFILE=/dev/stdout

# "private" function used by redirect_outputs_to_logfile()
function save_standard_outputs {
    if [ "$OUTPUTS_REDIRECTED" == "true" ]; then
        echo "[ERROR]: ${FUNCNAME[0]}: Cannot save standard outputs because they have been redirected before"
        exit 1
    fi
    exec 3>&1
    exec 4>&2
    trap restore_standard_outputs EXIT
}

# Params: $1 => logfile to write to
function redirect_outputs_to_logfile {
    if [ "$OUTPUTS_REDIRECTED" == "true" ]; then
        echo "[ERROR]: ${FUNCNAME[0]}: Cannot redirect standard outputs because they have been redirected before"
        exit 1
    fi
    LOGFILE=$1
    if [ -z "$LOGFILE" ]; then
        echo "[ERROR]: ${FUNCNAME[0]}: logfile empty [$LOGFILE]"
    fi
    if [ ! -f $LOGFILE ]; then
        touch $LOGFILE
    fi
    if [ ! -f $LOGFILE ]; then
        echo "[ERROR]: ${FUNCNAME[0]}: creating logfile [$LOGFILE]"
        exit 1
    fi
    save_standard_outputs
    exec 1>>${LOGFILE%.log}.log
    exec 2>&1
    OUTPUTS_REDIRECTED="true"
}

# "private" function used by save_standard_outputs()
function restore_standard_outputs {
    if [ "$OUTPUTS_REDIRECTED" == "false" ]; then
        echo "[ERROR]: ${FUNCNAME[0]}: Cannot restore standard outputs because they have NOT been redirected"
        exit 1
    fi
    exec 1>&-  # closes FD 1 (logfile)
    exec 2>&-  # closes FD 2 (logfile)
    exec 2>&4  # restore stderr
    exec 1>&3  # restore stdout
    OUTPUTS_REDIRECTED="false"
}
Example of usage inside script:
echo "this goes to stdout"
redirect_outputs_to_logfile /tmp/one.log
echo "this goes to logfile"
restore_standard_outputs
echo "this goes to stdout"

-
When I use your functions and the script attempts to restore standard outputs, I get echo: write error: Bad file number. The redirect works perfectly... the restore doesn't seem to. – Thom Schumacher Sep 13 '18 at 23:14
-
In order to get your script to work I had to comment out these lines and change the order: #exec 1>&- #closes FD 1 (logfile) #exec 2>&- #closes FD 2 (logfile); exec 1>&3 #restore stdout exec 2>&4 #restore stderr – Thom Schumacher Sep 14 '18 at 16:29
-
Sorry to hear that. I don't receive any errors when running in CentOS 7, bash 4.2.46. I have annotated the reference where I got those commands. It's: Ref: http://logan.tw/posts/2016/02/20/open-and-close-files-in-bash/ – Fernando Fabreti Sep 14 '18 at 18:33
-
I'm running these commands on AIX that is probably why. I added a post for the fix I made. – Thom Schumacher Sep 14 '18 at 18:43
For tcsh, I have to use the following command:
command >& file
If using command &> file, it will give an "Invalid null command" error.
