I'm working with a command line utility that requires passing the name of a file to write output to, e.g.

foo -o output.txt

The only thing it writes to stdout is a message that indicates that it ran successfully. I'd like to be able to pipe everything that is written to output.txt to another command line utility. My motivation is that output.txt will end up being a 40 GB file that I don't need to keep, and I'd rather pipe the streams than work on massive files in a stepwise manner.

Is there any way in this scenario to pipe the real output (i.e. output.txt) to another command? Can I somehow magically pass stdout as the file argument?

tripleee
Jake
  • some versions of unix/linux have access to stdout/err via `/dev/stdout`, etc. – shellter Oct 13 '11 at 15:35
  • the usual "write to stdout" convention for unixy tools is to use `-` as a filename (i.e. `foo -o -`). In your code, you could simply detect that special filename and use `stdout` for output instead of an `fopen`ed file. Don't print a status in that case, or print it to `stderr`, which you can redirect separately. – Mat Oct 13 '11 at 15:39
  • All excellent suggestions with great insights into the different aspects of the problem. I think the named pipe is the way to go, but the two answers are essentially dupes -- which do I mark as the answer? – Jake Oct 13 '11 at 15:46

6 Answers

Solution 1: Using process substitution

The most convenient way of doing this is by using process substitution. In bash the syntax looks as follows:

foo -o >(other_command)

(Note that this is a bashism. There are similar solutions for other shells, but the bottom line is that it's not portable.)
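As a concrete, runnable stand-in for the hypothetical `foo`, `sort` also takes an output file via `-o`, so you can try the idea without the real utility (the `sort`/`tr` pairing here is purely illustrative):

```shell
# sort -o <file> plays the role of foo -o <file>; the >(...) process
# substitution hands sort a pipe instead of a regular file (bash only).
printf 'banana\napple\ncherry\n' > input.txt
sort -o >(tr 'a-z' 'A-Z' > result.txt) input.txt
sleep 1   # the substituted process runs asynchronously; give it a moment
cat result.txt
```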

Solution 2: Using named pipes explicitly

You can do the above explicitly / manually as follows:

  1. Create a named pipe using the mkfifo command.

    mkfifo my_buf
    
  2. Launch your other command with that file as input.

    other_command < my_buf
    
  3. Execute foo and let it write its output to my_buf.

    foo -o my_buf
    

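The three steps can be put together in a single script (using `tr` as a stand-in for both `foo` and `other_command`, since steps 2 and 3 would otherwise block each other in one terminal):

```shell
# 1. create the named pipe
mkfifo my_buf
# 2. the reader must run in the background (or another terminal),
#    because opening a FIFO for reading blocks until a writer appears
tr 'a-z' 'A-Z' < my_buf > out.txt &
# 3. stand-in for: foo -o my_buf
printf 'hello\n' > my_buf
wait          # let the background reader finish
cat out.txt
rm my_buf     # unlike process substitution, the FIFO must be cleaned up
```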
Solution 3: Using /dev/stdout

You can also use the device file /dev/stdout as follows:

foo -o /dev/stdout | other_command
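For instance, with `sort` again standing in for `foo` (this relies on the OS exposing `/dev/stdout`, which Linux and the BSDs do):

```shell
printf '3\n1\n2\n' > nums.txt
# sort thinks it is writing to a file, but that "file" is stdout,
# so the sorted result flows straight into the pipe
sort -o /dev/stdout nums.txt | head -n 1
```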
aioobe

Named pipes work fine, but bash process substitution gives you a nicer, more direct syntax, with the added benefit of not leaving behind a permanent named pipe that must later be deleted (process substitution uses temporary named pipes behind the scenes):

foo -o >(other command)

Also, should you want to pipe the output to your command and also save the output to a file, you can do this:

foo -o >(tee output.txt) | other command
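A quick sanity check of the `tee` variant, once more with `sort -o` as a stand-in for `foo -o`: the sorted lines reach both the downstream pipeline and `saved.txt`.

```shell
printf 'b\na\n' > in.txt
# tee inherits sort's stdout (the pipe to tr), so it both saves the
# output to saved.txt and forwards it down the pipeline
sort -o >(tee saved.txt) in.txt | tr 'a-z' 'A-Z'
sleep 1   # belt and braces: the >(...) process is asynchronous
cat saved.txt
```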

frankc

Simply point the output file at /dev/stdout:

foo -o /dev/stdout
Nestor Urquiza
  • The problem with this solution is that it *also* captures the command's normal stdout, and if it is both emitting unwanted stdout *and* writing a wanted file, then the result will contain the unwanted content as well. – ErikE Oct 27 '20 at 16:07

You could use the magic of UNIX and create a named pipe :)

  1. Create the pipe

    $ mknod mypipe p
    
  2. Start the process that reads from the pipe

    $ second-process < mypipe
    
  3. Start the process that writes into the pipe

    $ foo -o mypipe
    
Toby Speight
ktf

If for some reason you don't have permission to write to /dev/stdout, you can try:

foo -o <(cat)

honeyspoon

I use /dev/tty as the output filename, the equivalent of using /dev/null when you want no output at all. Then pipe with | and you are done. (Be aware that /dev/tty sends the output to your controlling terminal, so it is shown on screen rather than fed into the pipe.)