
I need a way to make a process keep a certain file open forever. Here's an example of what I have so far:

sleep 1000 > myfile &

It works for a thousand seconds, but I really don't want to write some complicated sleep/loop construct. This post suggested that `cat` behaves like an infinite `sleep`. So I tried this:

cat > myfile &

It almost looks like a mistake, doesn't it? It seemed to work from the command line, but in a script the file connection did not stay open. Any other ideas?

User1
  • Not quite sure why you need to keep a file open forever but do nothing with it. Are you just trying to keep a folder from being deleted? – Robin Hsu Nov 17 '14 at 03:54

4 Answers


Rather than using a background process, you can also just use bash to open one of its file descriptors:

exec 5>myfile 

(The special use of exec here allows changing the current file descriptor redirections - see man bash for details). This will open file descriptor 5 to "myfile" (use >> if you don't want to empty the file).

You can later close the file again with:

exec 5>&-
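For example, a minimal open/write/close cycle (the file name is just an example):

```shell
# open FD 5 for appending to myfile, write through it, then close it
exec 5>>myfile
echo "first line" >&5
echo "second line" >&5
exec 5>&-
```

Anything written to `>&5` lands in the file, and the file stays open between writes.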

(One possible downside of this is that the FD gets inherited by every program that the shell runs in the meantime. Mostly this is harmless - e.g. your greps and seds will generally ignore the extra FD - but it could be annoying in some cases, especially if you spawn any processes that stay around, because they will then keep the FD open.)

Note: If you are using a newer version of bash (4.1 or later) you can use a slightly different syntax:

exec {fd}>myfile

This allocates a new file descriptor, and puts it in the variable fd. This can help ensure that scripts don't accidentally overwrite each other's file descriptors. To close the file later, use

exec {fd}>&-
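A quick sketch of the variable-FD form (assumes bash 4.1 or later; the file name is again just an example):

```shell
#!/usr/bin/env bash
# let bash pick a free descriptor and store its number in $fd
exec {fd}>myfile
echo "opened descriptor $fd"   # bash allocates from 10 upward
echo "some data" >&"$fd"       # write through the allocated FD
exec {fd}>&-                   # close it again
```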
psmears
  • This is a wonderful and really underappreciated feature - thanks! – FrankH. Jul 17 '13 at 17:14
  • This is really fantastic. I can run `exec 3>>/home/me/myfile` and then redirect output from several files to fd 3 (e.g. `grep thing file >> 3`) and add many things to the same file without having to type the path each time. When I'm done I can close the file. I love it. Thanks man. – vastlysuperiorman Jan 27 '15 at 19:16
  • @vastlysuperiorman: You're welcome! I think your grep might want to be `grep thing file >&3`, but yes, the idea is sound :) – psmears Jan 27 '15 at 23:36

The reason that cat > myfile & works is that it redirects standard input into a file.

If you launch it with an ampersand (in the background), it won't get ANY input, including end-of-file, which means it will wait forever and write nothing to the output file.

You can get an equivalent effect, except WITHOUT the dependency on standard input (which is what makes it fail in your script), with this command:

tail -f /dev/null > myfile &
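For example, a small sketch of holding the file open and releasing it later:

```shell
# hold myfile open for writing indefinitely, with no stdin dependency
tail -f /dev/null > myfile &
holder=$!

# ...later, release the file by killing the holder process
kill "$holder"
```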
DVK
  • That worked! I still don't understand why the `cat` worked in the terminal but not in a script. In the terminal I get a "Stopped" notification, and the script doesn't give me this. That must be part of the reason. – User1 Aug 11 '10 at 19:42
  • @User1 - I can speculate on why cat didn't work, but to be honest I don't know 100% for sure. Some sort of weird STDIN and tty interaction, probably. Feel free to ask that as a separate question if you're really interested - I'm sure someone will know the answer with certainty, and I'd be curious too; no time to investigate myself. – DVK Aug 11 '10 at 19:53
  • This helped me test `lsof`, to check whether a file is currently opened by another process. Thanks! – cangers Nov 20 '19 at 17:01
  • Same as @cangers, only I wanted to test `fuser` instead of `lsof` – user1593842 Dec 18 '20 at 12:17

On the cat > myfile & issue running in a terminal vs. as part of a script: in a non-interactive shell, the stdin of a command backgrounded with & gets implicitly redirected from /dev/null.

So cat > myfile & in a script is effectively cat </dev/null > myfile &, and cat terminates immediately on reading end-of-file.

See the POSIX standard on the Shell Command Language & Asynchronous Lists:

The standard input for an asynchronous list, before any explicit redirections are 
performed, shall be considered to be assigned to a file that has the same 
properties as /dev/null. If it is an interactive shell, this need not happen. 
In all cases, explicit redirection of standard input shall override this activity.

# some tests
sh -c 'sleep 10 & lsof -p ${!}'
sh -c 'sleep 10 0<&0 & lsof -p ${!}'
sh -ic 'sleep 10 & lsof -p ${!}'


# in a script
-  cat > myfile &
+  cat 0<&0 > myfile &
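You can observe the effect directly: in a non-interactive shell, the backgrounded cat reads end-of-file from /dev/null and exits immediately, so wait returns right away:

```shell
# the backgrounded cat inherits /dev/null as stdin, reads EOF
# immediately, and exits with status 0
sh -c 'cat > myfile & wait "$!"; echo "cat exited: $?"'
# prints: cat exited: 0
```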
ralfw
tail -f myfile

This 'follows' the file, and outputs any changes to the file. If you don't want to see the output of tail, redirect output to /dev/null or something:

tail -f myfile > /dev/null

You may want to use the --retry option, depending on your specific case. See man tail for more information.
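For example, a sketch of this approach (--retry and --follow=name are GNU tail options; -F is shorthand for the pair):

```shell
# follow myfile silently in the background; --retry keeps retrying
# the open if the file is not accessible (or does not exist) yet
tail --retry --follow=name myfile > /dev/null &
echo "tail running as PID $!"
```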

strager
  • This won't do what he wants, which is keep the file open for writing. It will merely read from it. – DVK Aug 11 '10 at 19:38
  • @DVK, He said 'keep the file open', not 'for writing'. – strager Aug 11 '10 at 23:12
  • No, but the `> myfile` implies that he meant writing. – Dennis Williamson Aug 12 '10 at 04:39
  • Over 7 years late, but worth clarification: `tail -f` does not keep the file open for reading, either. Per tail(1): `-s, --sleep-interval=N // with -f, sleep for approximately N seconds (default 1.0) between iterations` – Rich Jan 09 '18 at 21:02