With the & in the mode >& in the call
open STDERR, ">&STDOUT"; # or: open STDERR, ">&", \*STDOUT
the first given filehandle is made a copy of the second one. See open, and see man 2 dup2, since this goes via the dup2 syscall. The notation follows the shell's I/O redirection.
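For example, a minimal sketch (the file name out.txt is just for illustration) that first sends STDOUT to a file and then makes STDERR a copy of it:

use warnings;
use strict;

# Send STDOUT to a file, then make STDERR a duplicate of it
open STDOUT, '>', 'out.txt'  or die "Can't open out.txt: $!";
open STDERR, '>&', \*STDOUT  or die "Can't dup STDOUT: $!";

print STDOUT "via STDOUT\n";  # goes to out.txt
print STDERR "via STDERR\n";  # now also goes to out.txt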
Since the first filehandle here (STDERR)† already exists, it is first closed.
The effect is that prints to STDERR will go to where STDOUT was going before this was done, with the side effect of the original STDERR being closed.
This is legit and does not result in errors, but it is not a good way to redirect STDERR in general -- after that we cannot restore STDERR any more. See open for how to redirect STDERR.
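For completeness, here is a sketch of the usual save-and-restore approach (the names $saved_stderr and err.log are just for illustration):

# Save the original STDERR by duplicating it, redirect, then restore
open my $saved_stderr, '>&', \*STDERR  or die "Can't save STDERR: $!";
open STDERR, '>', 'err.log'            or die "Can't redirect STDERR: $!";

warn "this goes to err.log\n";

open STDERR, '>&', $saved_stderr       or die "Can't restore STDERR: $!";
warn "this goes to the original STDERR again\n";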
The rest of the comment clearly refers to a situation where backticks (see qx), which redirect STDOUT of the executed command(s) to the program, are used after that open call. All this seems to refer to an idea of redirecting STDERR to STDOUT in this way.
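A quick way to see what backticks capture (the echoed strings are just for illustration):

# qx() captures only the command's STDOUT; the command's STDERR is inherited, not captured
my $out = qx(echo to-stdout; echo to-stderr 1>&2);
print "captured: $out";   # prints "captured: to-stdout"
# while "to-stderr" shows up on this program's STDERR (normally the terminal)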
Alas, the STDERR, made by that open call to go where STDOUT was going, doesn't get redirected by the backticks and thus still goes "there." In my case prints to STDERR wind up on the terminal, as I see the warning (ls: cannot access...) with
perl -we'open STDERR, ">&STDOUT"; $out = qx(ls no_such)'
(unlike with perl -we'$out = qx(ls no_such 2>&1)'). Explicit prints to STDERR also go to the terminal, just as STDOUT does (add such prints and redirect output to a file to see).
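To spell that test out, a sketch (script and file names are just for illustration), to be run as perl dup_test.pl > capture.txt:

# dup_test.pl -- run as:  perl dup_test.pl > capture.txt
open STDERR, '>&', \*STDOUT  or die "Can't dup STDOUT: $!";

print STDERR "explicit print to STDERR\n";  # follows STDOUT, so it ends up in capture.txt
my $out = qx(ls no_such);                   # the ls warning also ends up in capture.txt, not in $out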
This may be expected: since & made a copy of the filehandle, the "new" STDERR (which replaced the original one) still goes where STDOUT was going, that is, to the terminal. That is of course unintended in this case and thus an error.
† Every program in UNIX gets connected to the standard streams stdin, stdout, and stderr, with file descriptors 0, 1, and 2, respectively. In a Perl program we then get ready-made filehandles for these, like STDERR (for fd 2).
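For instance, in a normal run the descriptors can be checked directly:

print fileno(STDIN), ' ', fileno(STDOUT), ' ', fileno(STDERR), "\n";   # prints: 0 1 2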
Some generally useful posts on manipulating file descriptors in the shell: