
I want to redirect stdout and stderr to a file, preserving the relative order of output, while also showing stderr on screen. A lot of existing questions discuss this:

But none of them seem to do what I want or preserve the order.

My test script:

#!/bin/bash

echo "1: good" >&1
echo "2: error" >&2
echo "3: error" >&2
echo "4: good" >&1
echo "5: error" >&2
echo "6: good" >&1
echo "7: error" >&2
echo "8: good" >&1
echo "9: error" >&2

What I have so far:

$ ./test.sh 2>&1 > output.log
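For reference, a minimal sketch of what that redirection order actually does (recreating a trimmed copy of the test script under `/tmp`; the paths are illustrative):

```shell
#!/bin/sh
# Recreate a trimmed test script so this sketch is self-contained.
cat > /tmp/demo.sh <<'EOF'
#!/bin/sh
echo "1: good"
echo "2: error" >&2
echo "3: good"
EOF
chmod +x /tmp/demo.sh

# Redirections apply left to right: "2>&1 > out.log" first points fd 2 at
# whatever fd 1 currently is (the terminal), THEN points fd 1 at the file.
# Result: stderr stays on screen, and only stdout lands in the file.
/tmp/demo.sh 2>&1 > /tmp/out1.log

# "> out.log 2>&1" instead points both fds at the same open file
# description, so their relative order IS preserved -- but nothing is
# shown on screen.
/tmp/demo.sh > /tmp/out2.log 2>&1
```

So the command above logs only stdout; stderr goes to the terminal and never reaches the file.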
IMTheNachoMan
  • Do none of the answers describe why what you want is impossible (to do with 100% reliability) without syscall-level tracing? And I'm pretty sure I wrote one that describes how to do that tracing. – Charles Duffy Jan 21 '19 at 00:03
  • Ahh, here it is: [Separately redirecting and recombining stderr & stdout without losing ordering](https://stackoverflow.com/questions/45760692/separately-redirecting-and-recombining-stderr-stdout-without-losing-ordering). – Charles Duffy Jan 21 '19 at 00:04
  • BTW, the accepted answer on [this related question over at UNIX & Linux SE](https://unix.stackexchange.com/a/418745/3113), and the Q&A in its associated comment thread, does a good job of explaining *why* what you're asking for is impossible relying only on standard UNIX semantics without modifying the program generating the output. – Charles Duffy Jan 21 '19 at 18:53

1 Answer


Standard UNIX semantics do not permit this; it cannot be reliably done using only facilities built into bash or specified in POSIX.

Writes have a guaranteed order relative to each other only when they go to the same destination. As soon as you redirect stdout and stderr to different destinations, the operating system no longer guarantees any ordering between writes to them; doubly so when those writes go through a separate process (such as `tee`) that must read the content and perform its own further writes, subject to OS-level scheduling.

It may be possible to use syscall-level tracing facilities provided by (or available for) your operating system to generate an absolute ordering of system calls, and thus to generate definitively-ordered output; an example of this is given in the answer to [Separately redirecting and recombining stderr & stdout without losing ordering](https://stackoverflow.com/questions/45760692/separately-redirecting-and-recombining-stderr-stdout-without-losing-ordering).

Using bash itself, however, you can only control where a subprocess's file descriptors are connected, not how content written to those descriptors is buffered and flushed; there simply isn't enough control to keep writes from being reordered, so what you're asking for isn't possible.
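That said, if you can relax the ordering guarantee, a best-effort approach is to `tee` each stream into the same log (appended atomically, since `tee -a` opens with `O_APPEND`) while passing stderr through to the screen. A sketch, using an assumed `/tmp` copy of the test script; lines from the two streams may interleave out of order:

```shell
# Recreate the test script (illustrative path).
cat > /tmp/test.sh <<'EOF'
#!/bin/bash
echo "1: good"
echo "2: error" >&2
echo "3: good"
echo "4: error" >&2
EOF
chmod +x /tmp/test.sh

: > /tmp/output.log
# Process substitution is a bashism, hence the explicit bash -c.
# Both streams are appended to one log; stderr is also passed through.
bash -c '/tmp/test.sh > >(tee -a /tmp/output.log) \
                     2> >(tee -a /tmp/output.log >&2)'
sleep 1   # crude: give the tee substitutions time to flush before reading
```

All lines end up in the log and stderr still reaches the terminal; the one thing you give up is the cross-stream ordering, for the reasons above.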

Charles Duffy
  • So does that mean it's not even possible to have `stderr` and `stdout` go to one file, in order, and have `stderr` also go to a separate file? – IMTheNachoMan Jan 21 '19 at 00:28
  • Basically I have a script that is rsyncing some files and I want to log all the output, stderr and stdout, in order to a file but I also want to see just the stderr somewhere. I figure I can either send the stderr to the screen or to another file. – IMTheNachoMan Jan 21 '19 at 00:49
  • As soon as stdout and stderr aren't copies of the same FD, you lose any guarantee that alternating writes will retain their ordering, because the OS-level guarantees about order *only apply to subsequent writes to the same FD*. But to make them be directed to different destinations, they *must* be different descriptors, so you end up with hacks such as that behind the link. – Charles Duffy Jan 21 '19 at 00:57
  • That said, rsync has options to make its logs be in a very clean and recognizable format, easy to distinguish from its stderr. And I'm not sure that you really *need* to keep the ordering guarantees for your use case. – Charles Duffy Jan 21 '19 at 00:58
  • That is what I am doing now. Including a custom string in `--out-format` that I can then later grep out. Thanks. The only issue is that I have to wait for the `rsync` to finish before I can cat/grep for the errors. I was hoping for a way to do both at the same time -- like showing errors live while the `rsync` is in process. – IMTheNachoMan Jan 21 '19 at 22:07
  • That's *entirely* doable, so long as you don't really need to retain the ordering. `rsync ... > >(function_that_parses_stdout) 2> >(function_that_parses_stderr)` -- if your functions write content through to stdout and stderr respectively (and whatever additional logs you want to keep) it'll still be presented, you just don't have guarantees that lines won't lose their ordering between the two streams. (That pattern isn't ideal if you want to set variables in the parent process based on results, but an answer that works better for that purpose is easy enough; just doesn't fit into a comment). – Charles Duffy Jan 21 '19 at 22:25
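A minimal sketch of the pattern from that last comment, with a toy producer standing in for `rsync` (the parser functions and `/tmp` paths are hypothetical):

```shell
# Each stream gets its own parser via process substitution (a bashism,
# hence the explicit bash -c).
bash -c '
parse_stdout() { while IFS= read -r line; do printf "OUT %s\n" "$line"; done; }
parse_stderr() { while IFS= read -r line; do printf "ERR %s\n" "$line" >&2; done; }

{ echo "copied a"; echo "oops" >&2; echo "copied b"; } \
  > >(parse_stdout > /tmp/out.txt) \
  2> >(parse_stderr 2> /tmp/err.txt)
'
sleep 1   # let the process substitutions drain before inspecting the files
```

Within each file the per-stream order is intact; as the comment notes, only the relative ordering *between* the two streams is lost.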