
Possible Duplicate:
Preventing multiple process instances on Linux

I have a multi-threaded application which can be run either as a daemon process or as a one-time run with input parameters.

I want to ensure that if the application is already running as a daemon process, the user should not be allowed to run it again.

EDIT: After you all suggested going for flocks, I tried it and put it on the server. I now have a weird problem: when the servers are bounced, they delete all the files, including the lock file :(. What now?

dicaprio
  • On Linux this is done via a `lockfile`. – user703016 May 25 '12 at 07:32
  • @aix I just realised that and made an edit. Thanks anyway. – dicaprio May 25 '12 at 07:33
  • 1
    http://stackoverflow.com/questions/2964391/preventing-multiple-process-instances-on-linux – NPE May 25 '12 at 07:34
  • @Cicada good point, so it is like I have to write a file in the current directory when the process is started, and shouldn't allow the application to run if the file already exists? – dicaprio May 25 '12 at 07:35
  • Is it actually a single process with multiple threads, or is it multiple cooperating processes? The answer depends on that. – tbert May 25 '12 at 07:36
  • @Cicada the application can be run from any directory, since the /bin is added to the PATH variable. Which directory should I check for the lock file? – dicaprio May 25 '12 at 07:37
  • @tbert: it is a single process with multiple threads. – dicaprio May 25 '12 at 07:38
  • @Cicada Depends. Is it a per-user application? Then `~`. Else I'd go for `/tmp` (or `/var/run` maybe). – user703016 May 25 '12 at 07:38
  • In that case the user should have permission to that directory; if he doesn't have permission, then I should not start the application? – dicaprio May 25 '12 at 07:43
  • Also this raises another question in my mind: what if the application terminates abruptly before deleting the file? In that case, the application wouldn't start even when no instance is running. – dicaprio May 25 '12 at 07:50
  • @dicaprio apparently this got re-edited after I answered (or I just misread); in this case, a lock file is the answer. – tbert May 25 '12 at 07:50

3 Answers

6

The easiest way is to bind to a port (it could be a Unix-domain socket, in a "private" directory). Only one process can bind to a given port, so if the port is bound, the process is running. If the process exits, the kernel automatically closes the file descriptor. It does cost your process an (unused?) file descriptor, but normally a daemon process would need some listening socket anyway.

wildplasser
3

You can try using file locks. Upon starting, the process can open a file, lock it, and check it for a value (e.g. the size of the file). If it's not the desired value, the process can exit; if it is, the process changes the file to an undesired value.

R.D.
  • Can't I just check if the file exists? The process which created the file will delete it when it terminates. This raises another question in my mind: what if an interrupt occurs? How do I handle it? – dicaprio May 25 '12 at 07:41
  • Yeah, just checking for the existence of the file is probably enough. You can create the file using [tmpfile](http://linux.die.net/man/3/tmpfile). If your program crashes, the file is automatically deleted. – R.D. May 25 '12 at 07:46
  • Actually, thinking about it, I don't think opening is enough. If two processes call open() at the same time, race conditions might occur inside the kernel. It's safer not to assume. – R.D. May 25 '12 at 07:54
  • tmpfile, I didn't know about this before :(. Thank you, I learnt something. – dicaprio May 25 '12 at 07:56
  • Actually I'm an idiot. If you make the file using tmpfile, you won't know the file name, so each time tmpfile is called it'll create a new file. I don't think there's a solution to this problem if your process crashes. Firefox and git routinely run into file-lock problems. – R.D. May 25 '12 at 08:04
  • Files created by `tmpfile` usually *do not have a name*. So it's worse than just not knowing the name... – R.. GitHub STOP HELPING ICE May 25 '12 at 14:31
  • Also, in general using the existence of a file as a lock is bad because it will not be removed if the process is killed. – R.. GitHub STOP HELPING ICE May 25 '12 at 14:34
  • @R.. your point is correct; now I'm facing a weird problem after I implemented the lockfile. I notice that when some processes are bounced they run "rm -rf" and this file gets deleted. – dicaprio May 25 '12 at 19:54
  • Yes, lock files are a very bad approach. – R.. GitHub STOP HELPING ICE May 25 '12 at 22:45
2

I implemented a similar thing by using shell scripts to start and stop the daemon.

In the start script, before invoking the executable, check whether that executable is already running. If it is found to be running, the new process is not started.

Vaibhav
  • Thanks, this is definitely a good idea. I'll first see if we can implement it in the process itself; if not, this idea works too. – dicaprio May 25 '12 at 08:00
  • 1
    This is not a good idea. It's error-prone (searching running processes is not robust and can give false positives) and race-prone (it will allow two daemons started at almost the same time to both run). – R.. GitHub STOP HELPING ICE May 25 '12 at 14:33
  • I agree it is not the most robust way, but it suited my requirements, as only one user had the permissions to start the daemon. – Vaibhav May 26 '12 at 05:04