1945

To redirect standard output to a truncated file in Bash, I know to use:

cmd > file.txt

To redirect standard output in Bash, appending to a file, I know to use:

cmd >> file.txt

To redirect both standard output and standard error to a truncated file, I know to use:

cmd &> file.txt

How do I redirect both standard output and standard error, appending to a file? cmd &>> file.txt did not work for me.

Peter Mortensen
flybywire
  • I would like to note that &>outfile is a Bash (and others) specific code and not portable. The way to go portable (similar to the appending answers) always was and still is >outfile 2>&1 – TheBonsai May 18 '09 at 04:48
  • … and ordering of that is important. – Torsten Bronger Jun 19 '20 at 12:08
  • If you care about the ordering of the content of the two streams, see @ed-morton 's answer to a similar question, [here](https://stackoverflow.com/questions/56406028/redirect-to-a-file-stdout-first-and-then-stderr/56407419#56407419). – Ana Nimbus Mar 23 '22 at 13:16

9 Answers

2482
cmd >>file.txt 2>&1

Bash executes the redirects from left to right as follows:

  1. >>file.txt: Open file.txt in append mode and redirect stdout there.
  2. 2>&1: Redirect stderr to "where stdout is currently going". In this case, that is a file opened in append mode. In other words, the &1 reuses the file descriptor which stdout currently uses.
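
For example (a small sketch; the command and file names are arbitrary), the left-to-right order is what makes the difference:

# Both stdout and stderr end up appended to file.txt:
ls /tmp /does-not-exist >> file.txt 2>&1

# With the order reversed, 2>&1 is evaluated while stdout still points
# at the terminal, so the error message never reaches file.txt:
ls /tmp /does-not-exist 2>&1 >> file.txt
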
Fritz
Alex Martelli
  • works great! but is there a way to make sense of this or should I treat this like an atomic bash construct? – flybywire May 18 '09 at 08:15
  • It's simple redirection; redirection statements are evaluated, as always, from left to right. >>file : Red. STDOUT to file (append mode) (short for 1>>file). 2>&1 : Red. STDERR to "where stdout goes". Note that the interpretation "redirect STDERR to STDOUT" is wrong. – TheBonsai May 18 '09 at 08:55
  • It says "append output (stdout, file descriptor 1) onto file.txt and send stderr (file descriptor 2) to the same place as fd1". – Dennis Williamson May 18 '09 at 09:07
  • @TheBonsai however what if I need to redirect STDERR to another file but appending? is this possible? – arod Jun 02 '13 at 22:26
  • if you do `cmd >>file1 2>>file2` it should achieve what you want. – Woodrow Douglass Sep 06 '13 at 21:24
  • if pipe is what you want, try `cmd1 |& cmd2`. It works in bash v4+. – jpbochi Jan 16 '20 at 17:11
  • Just to emphasize this to users like myself: the order of the redirects matters; 2>&1 should come after >>file.txt. – CodeBrew Jul 23 '20 at 18:27
  • I have file1.txt and file2.txt in the current directory. When I run `ls file{1..5}.txt 1>res.txt 2>&1`, it gives the correct output in res.txt. But when I run `ls file{1..5}.txt 1>res.txt 2>res.txt`, some of the output is missing from res.txt compared to what it would have been without redirecting any stream. Why? – tusharRawat Aug 04 '20 at 16:08
  • Much punctuation makes this syntax difficult. You may be familiar with file descriptor `1` = `stdout` and `2` = `stderr`. `stdout` is the default (over `stderr`) and therefore comes first. It is necessary to put the `2` before the `>` when redirecting `stderr` because otherwise, as soon as `>` is encountered, it will be interpreted as redirecting `stdout`. Your programming experience may help you remember `&` as a reference character, so `&1` is something like "pointer to 1". Hopefully this breakdown provides some mnemonic hooks that can assist in recollection without making another SO search ;) – NeilG Dec 26 '22 at 05:41
471

There are two ways to do this, depending on your Bash version.

The classic and portable (Bash pre-4) way is:

cmd >> outfile 2>&1

A nonportable way, starting with Bash 4 is

cmd &>> outfile

(analogous to &> outfile)

For good coding style, you should

  • decide if portability is a concern (then use the classic way)
  • decide if portability even to Bash pre-4 is a concern (then use the classic way)
  • no matter which syntax you use, don't change it within the same script (confusion!)

If your script already starts with #!/bin/sh (no matter if intended or not), then the Bash 4 solution, and in general any Bash-specific code, is not the way to go.
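
For illustration (a small sketch; the command and file name are arbitrary), a POSIX shell such as dash parses &>> as & followed by >>, so the command is put in the background and its error output is not redirected at all:

#!/bin/sh
# Under plain sh this is parsed as "ls /does-not-exist &" plus ">> out.log":
# the command runs in the background, its error still goes to the terminal,
# and out.log is merely created (or left as-is).
ls /does-not-exist &>> out.log

# The portable form behaves the same in every POSIX shell:
ls /does-not-exist >> out.log 2>&1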

Also remember that Bash 4 &>> is just shorter syntax — it does not introduce any new functionality or anything like that.

The syntax is (beside other redirection syntax) described in the Bash hackers wiki.

Matthias Braun
TheBonsai
  • I prefer &>> as it's consistent with &> and >>. It's also easier to read 'append output and errors to this file' than 'send errors to output, append output to this file'. Note that while Linux generally has a current version of bash, OS X, at the time of writing, still requires bash 4 to be manually installed via homebrew etc. – mikemaccana May 20 '13 at 09:30
  • I like it more because it is shorter and only two places per line, so what would for example zsh make out of "&>>"? – Phillipp Feb 17 '16 at 14:20
  • Also important to note that in a cron job, you have to use the pre-4 syntax, even if your system has Bash 4. – hyperknot May 18 '17 at 10:03
  • @zsero cron doesn't use bash at all... it uses `sh`. You can change the default shell by prepending `SHELL=/bin/bash` to the `crontab -e` file. – Ray Foss Jun 05 '18 at 20:45
115

In Bash you can also explicitly specify your redirects to different files:

cmd >log.out 2>log_error.out

Appending would be:

cmd >>log.out 2>>log_error.out
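
For instance (a small sketch; the command is arbitrary), a command that produces both kinds of output leaves a clean split between the two logs:

ls /tmp /does-not-exist >>log.out 2>>log_error.out
# log.out       now ends with the listing of /tmp
# log_error.out now ends with the "cannot access '/does-not-exist'" message
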
Aaron R.
  • Redirecting two streams to the same file using your first option will cause the first one to write "on top" of the second, overwriting some or all of the contents. Use ***cmd >> log.out 2> log.out*** instead. – Orestis P. Dec 11 '15 at 14:33
  • Thanks for catching that; you're right, one will clobber the other. However, your command doesn't work either. I think the only way to write to the same file is as has been given before: `cmd >log.out 2>&1`. I'm editing my answer to remove the first example. – Aaron R. Dec 11 '15 at 15:36
  • The reason `cmd > my.log 2> my.log` doesn't work is that the redirects are evaluated from left to right: `> my.log` says "create or truncate `my.log` and redirect `stdout` to that file", and *after* that has already been done, `2> my.log` is evaluated and says "truncate `my.log` again and redirect `stderr` to it". The two descriptors now refer to the same file but keep independent offsets, so writes through one overwrite writes made through the other and part of the output is lost. – Mikko Rantalainen Jul 08 '21 at 07:32
  • On the other hand, `cmd > my.log 2>&1` works because `> my.log` says "create or truncate `my.log` and redirect `stdout` to that file", and after that has already been done, the `2>&1` says "point file handle 2 to wherever file handle 1 points". Since file handle 1 is stdout and 2 is stderr, `stderr` then shares the already opened `my.log` from the first redirect. Notice that the syntax `>&` doesn't create or modify actual files, so there's no need for `>>&`. (If the *first* redirect had been `>> my.log`, then the file would simply have been opened in append mode.) – Mikko Rantalainen Jul 08 '21 at 07:40
94

This should work fine:

your_command 2>&1 | tee -a file.txt

It will store all logs in file.txt as well as dump them in the terminal.
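
One Bash-specific caveat worth a short sketch: in a pipeline, $? is the exit status of tee, not of your command, so check PIPESTATUS (or set -o pipefail) if you need the command's own status:

your_command 2>&1 | tee -a file.txt
echo "your_command exited with ${PIPESTATUS[0]}"   # status of your_command, not tee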

Peter Mortensen
Pradeep Goswami
  • This is the correct answer if you want to see the output in the terminal, too. However, this was not the question originally asked. – Mikko Rantalainen Apr 15 '20 at 08:04
  • tee with a pipe takes a lot more time than direct redirection. It works, but slowly, with more memory used and an extra thread. – NeronLeVelu Jun 20 '23 at 11:21
72

In Bash 4 (as well as Z shell (zsh) 4.3.11):

cmd &>> outfile

it works just out of the box.

Peter Mortensen
A B
  • @all: this is a good answer, since it works with bash and is brief, so I've edited to make sure it mentions bash explicitly. – mikemaccana May 20 '13 at 08:47
  • @mikemaccana: [TheBonsai's answer](http://stackoverflow.com/a/876267/4279) shows bash 4 solution since 2009 – jfs Mar 27 '14 at 17:56
  • Why does this answer even exist when it's included in TheBonsai's answer? Please consider deleting it. You'll get a [disciplined badge](https://meta.stackexchange.com/questions/7609/what-is-the-purpose-of-the-disciplined-badge). – Dan Dascalescu Jun 14 '21 at 06:23
32

Try this:

You_command 1> output.log  2>&1

Your usage of &> x.file does work in Bash 4. Sorry for that :(

Here are some additional tips.

0, 1, 2, ..., 9 are file descriptors in Bash.

0 stands for standard input, 1 stands for standard output, and 2 stands for standard error. 3–9 are spare for any other temporary usage.

Any file descriptor can be redirected to another file descriptor or to a file by using the operator > or >> (append).

Usage: <file_descriptor> > <filename | &file_descriptor>
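
As a small illustration of a spare descriptor in use (the log file name is arbitrary), a script can park stdout on descriptor 3, point stdout somewhere else for a while, and then restore it:

exec 3>&1            # fd 3 now points wherever stdout currently points
exec 1>>output.log   # stdout now appends to output.log
echo "this line goes to output.log"
exec 1>&3 3>&-       # restore stdout from fd 3, then close fd 3
echo "this line goes to the terminal again"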

Please see the reference in Chapter 20. I/O Redirection.

Peter Mortensen
Quintus.Zhou
  • Your example will do something different than the OP asked for: It will redirect the stderr of `You_command` to stdout and the stdout of `You_command` to the file `output.log`. Additionally, it will not append to the file but will overwrite it. – pabouk - Ukraine stay strong May 31 '14 at 12:38
  • Correct: a ***file descriptor*** could be any value of 3 or more for all other files. – Itachi Dec 25 '14 at 06:46
  • Your answer shows the most common output redirection error: redirecting STDERR to where STDOUT is currently pointing and only after that redirecting STDOUT to the file. This will not cause STDERR to be redirected to the same file. The order of the redirections matters. – Jan Wikholm Jan 04 '15 at 12:51
  • Does it mean I should first redirect STDERR to STDOUT, then redirect STDOUT to a file? `1 > output.log 2>&1` – Quintus.Zhou Mar 04 '15 at 06:10
  • @Quintus.Zhou Yup. Your version redirects err to out, and at the same time out to file. – Alex Yaroshevich Mar 08 '15 at 23:22
  • Re *"does work in..."*: Do you mean *"does* ***not*** *work in..."* (the opposite)? – Peter Mortensen Aug 16 '21 at 11:24
19

Another approach:

If using older versions of Bash where &>> isn't available, you also can do:

(cmd 2>&1) >> file.txt

This spawns a subshell, so it's less efficient than the traditional approach of cmd >> file.txt 2>&1, and it consequently won't work for commands that need to modify the current shell (e.g. cd, pushd), but this approach feels more natural and understandable to me:

  1. Redirect standard error to standard output.
  2. Redirect the new standard output by appending to a file.

Also, the parentheses remove any ambiguity of order, especially if you want to pipe standard output and standard error to another command instead.

To avoid starting a subshell, you could instead use curly braces rather than parentheses to create a group command:

{ cmd 2>&1; } >> file.txt

(Note that a semicolon (or newline) is required to terminate the group command.)
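
As a small illustration of that difference (the directory and file names are arbitrary), a directory change made inside the group command persists in the current shell, while one made inside the subshell does not:

(cd /tmp 2>&1) >> file.txt     # subshell: the shell's working directory is unchanged afterwards
{ cd /tmp 2>&1; } >> file.txt  # group command: the shell is now in /tmp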

Peter Mortensen
jamesdlin
  • This implementation causes one extra process for the system to run. Using the syntax `cmd >> file 2>&1` works in all shells and does not need an extra process to run. – Mikko Rantalainen Apr 15 '20 at 08:06
  • @MikkoRantalainen I already explained that it spawns a subshell and is less efficient. The point of this approach is that if efficiency isn't a big deal (and it rarely is), this way is easier to remember and harder to get wrong. – jamesdlin Apr 15 '20 at 09:21
  • @MikkoRantalainen I've updated my answer with a variant that avoids spawning a subshell. – jamesdlin Jun 28 '20 at 04:13
  • If you truly cannot remember whether the syntax is `cmd >> file 2>&1` or `cmd 2>&1 >> file`, I think it would be easier to do `cmd 2>&1 | cat >> file` instead of using braces or parentheses. For me, once you understand that the implementation of `cmd >> file 2>&1` is literally "redirect STDOUT to `file`" followed by "redirect STDERR to whatever *file* STDOUT is currently pointing to" (which is obviously `file` after the first redirect), it's immediately obvious which order to put the redirects in. UNIX does not support redirecting to a stream, only to the *file* descriptor pointed to by a stream. – Mikko Rantalainen Jun 29 '20 at 06:58
15

Redirections from the script itself

You could plan redirections from the script itself:

#!/bin/bash

exec 1>>logfile.txt
exec 2>&1

/bin/ls -ld /tmp /tnt

Running this will create/append logfile.txt, containing:

/bin/ls: cannot access '/tnt': No such file or directory
drwxrwxrwt 2 root root 4096 Apr  5 11:20 /tmp

Or

#!/bin/bash

exec 1>>logfile.txt
exec 2>>errfile.txt

/bin/ls -ld /tmp /tnt

This creates or appends standard output to logfile.txt and creates or appends error output to errfile.txt.

Log to many different files

You could create two different logfiles, appending to one overall log and recreating another last log:

#!/bin/bash

if [ -e lastlog.txt ] ;then
    mv -f lastlog.txt lastlog.old
fi
exec 1> >(tee -a overall.log /dev/tty >lastlog.txt)
exec 2>&1

ls -ld /tnt /tmp

Running this script will

  • if lastlog.txt already exists, rename it to lastlog.old (overwriting lastlog.old if it exists).
  • create a new lastlog.txt.
  • append everything to overall.log
  • output everything to the terminal.

Simple and combined logs

#!/bin/bash

[ -e lastlog.txt ] && mv -f lastlog.txt lastlog.old
[ -e lasterr.txt ] && mv -f lasterr.txt lasterr.old

exec 1> >(tee -a overall.log combined.log /dev/tty >lastlog.txt)
exec 2> >(tee -a overall.err combined.log /dev/tty >lasterr.txt)

ls -ld /tnt /tmp

So you have

  • lastlog.txt last run log file
  • lasterr.txt last run error file
  • lastlog.old previous run log file
  • lasterr.old previous run error file
  • overall.log appended overall log file
  • overall.err appended overall error file
  • combined.log appended overall error and log combined file.
  • still output to the terminal

And for an interactive session, use stdbuf:

Regarding Fonic's comment, and after some tests, I have to agree: with tee, stdbuf is useless. But ...

If you plan to use this in an *interactive* shell, you must tell `tee` not to buffer its input/output:

# Source this to multi-log your session
[ -e lasterr.txt ] && mv -f lasterr.txt lasterr.old
[ -e lastlog.txt ] && mv -f lastlog.txt lastlog.old
exec 2> >(exec stdbuf -i0 -o0 tee -a overall.err combined.log /dev/tty >lasterr.txt)
exec 1> >(exec stdbuf -i0 -o0 tee -a overall.log combined.log /dev/tty >lastlog.txt)

Once this is sourced, you could try:

ls -ld /tnt /tmp

More complex sample

From my 3 remarks about how to Convert Unix timestamp to a date string

I've used a more complex command to parse and reassemble Squid's log in real time. As each line begins with a UNIX epoch time with milliseconds, I split the line on the first dot, add an @ symbol before the epoch seconds to pass them to date -f - +%F\ %T, then reassemble date's output and the rest of the line with a dot by using paste -d ..

exec {datesfd}<> <(:)
tail -f /var/log/squid/access.log |
    tee >(
        exec sed -u 's/^\([0-9]\+\)\..*/@\1/'|
            stdbuf -o0 date -f - +%F\ %T >&$datesfd
    ) |
        sed -u 's/^[0-9]\+\.//' |
        paste -d . /dev/fd/$datesfd -

With date, stdbuf was required...

Some explanations about exec and stdbuf commands:

  • Running forks by using $(...) or <(...) is done by running a subshell which will execute binaries in another subshell (a sub-subshell). The exec command tells the shell that there are no further commands in the script to be run, so the binary (stdbuf ... tee) will be executed as a replacement process, at the same level (no need to reserve more memory for running another sub-process).

    From bash's man page (man -P'less +/^\ *exec\ ' bash):

        exec [-cl] [-a name] [command [arguments]]
               If  command  is  specified,  it  replaces the
               shell.  No new process is created....
    

    This is not really needed, but it reduces the system footprint.

  • From stdbuf's man page:

    NAME
           stdbuf  -  Run COMMAND, with modified buffering
           operations for its standard streams.
    

    This tells the system to use unbuffered I/O for the tee command, so all output will be updated immediately when input arrives.

F. Hauri - Give Up GitHub
  • See further: [Pipe output to two different commands](https://stackoverflow.com/a/13108173/1765658), then follow the link to the *more detailed answer on this duplicate* in a [comment](https://stackoverflow.com/questions/13107783/pipe-output-to-two-different-commands/13108173#comment47763415_13108173). – F. Hauri - Give Up GitHub Nov 11 '21 at 18:14
  • Could you explain how `exec stdbuf` helps in this context? The man page of `stdbuf` states that it does not have any effect on `tee`? – Fonic Sep 05 '22 at 08:25
  • @Fonic ***Some explanations about exec and stdbuf commands***, published! – F. Hauri - Give Up GitHub Sep 05 '22 at 08:51
  • Thanks, but still: the man page of `stdbuf` states that `tee` won't be affected by it, so what's the point? Quote: `NOTE: If COMMAND adjusts the buffering of its standard streams ('tee' does for example) then that will override corresponding changes by 'stdbuf'` – Fonic Sep 05 '22 at 20:12
  • @Fonic Sorry for the delay... I had some tests to do... Answer edited! (Your comment is mentioned.) – F. Hauri - Give Up GitHub Jan 21 '23 at 16:01
0

This is terribly good!

Redirect the output to a log file and stdout within the current script.

Refer to https://stackoverflow.com/a/314678/5449346. It's very simple and clean, and it redirects all the script's output to the log file and stdout, including the output of scripts called from the script:

exec > >(tee -a "logs/logdata.log") 2>&1

prints the logs on the screen as well as writes them into a file – shriyog Feb 2, 2017 at 9:20

Typically we would place one of these at or near the top of the script. Scripts that parse their command lines would do the redirection after parsing.

Send stdout to a file:

exec > file

Send stdout to a file, with stderr:

exec > file
exec 2>&1

Append both stdout and stderr to a file:

exec >> file
exec 2>&1

As Jonathan Leffler mentioned in his comment:

exec has two separate jobs. The first one is to replace the currently executing shell (script) with a new program. The other is changing the I/O redirections in the current shell. This is distinguished by having no argument to exec.

tom