
I often find myself stringing together a series of shell commands, ultimately with the goal of replacing the contents of a file. However, > opens the target file for writing (truncating it) as soon as the shell sets up the pipeline, so the contents are gone before the earlier commands ever get to read them.

For lack of a better term, is there a "lazy evaluation" version of > that will wait until all the previous commands have been executed before opening the file for writing?

Currently I'm using:

somecommand file.txt | ... | ... > tmp.txt && rm file.txt && mv tmp.txt file.txt

Which is quite ugly.
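
For reference, the same temp-file dance can be made a little safer with mktemp, so a failed pipeline never clobbers the original (a minimal sketch; the grep and sort stages are hypothetical stand-ins for the real commands):

    # Write to a mktemp-created file first, then rename it over the original.
    # Note: mv replaces file.txt, so the result keeps the temp file's
    # permissions (mktemp creates it mode 0600), not the original's.
    tmp=$(mktemp) &&
    grep -v '^#' file.txt | sort > "$tmp" &&
    mv "$tmp" file.txt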

Scott Ritchie

1 Answer


sponge will help here:

(Quoting from the manpage)

NAME
    sponge - soak up standard input and write to a file

SYNOPSIS
    sed '...' file | grep '...' | sponge file

DESCRIPTION
    sponge reads standard input and writes it out to the specified file.
    Unlike a shell redirect, sponge soaks up all its input before opening
    the output file. This allows constructing pipelines that read from and
    write to the same file.

    It also creates the output file atomically by renaming a temp file into
    place, and preserves the permissions of the output file if it already
    exists. If the output file is a special file or symlink, the data will
    be written to it.

    If no output file is specified, sponge outputs to stdout.
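
Applied to the question's example, the whole thing collapses to a single pipeline (a sketch; sponge ships in the moreutils package, and grep/sort again stand in for the real commands):

    # sponge buffers all of its input before file.txt is opened for
    # writing, then renames a temp file into place atomically.
    grep -v '^#' file.txt | sort | sponge file.txt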

See also: Can I read and write to the same file in Linux without overwriting it? on unix.SE

TimWolla