
I wanted to quickly implement some sort of locking in a Perl program on Linux that could be shared between different processes.

So I used mkdir as an atomic operation: it returns true if it created the directory (i.e. it didn't exist yet) and false if the directory already exists. I remove the directory right after the critical section.

Now, it was pointed out to me that this is not good practice in general (independent of the language). I think it's quite OK, but I would like to ask your opinion.

Edit: to show an example, my code looked something like this:

while (!mkdir "lock_dir") { sleep 1 }   # wait some time, then retry
# ... critical section ...
rmdir "lock_dir";
Karel Bílek

2 Answers


IMHO this is very bad practice. What if the Perl script that created the lock directory is killed during the critical section? Another Perl script waiting for the lock directory to be removed will wait forever, because the script that originally created it will never remove it. For safe locking, use flock() on a lock file (see perldoc -f flock).

Friek

This is fine until an unexpected failure (e.g. program crash, power failure) happens while the directory exists.

After that, the program will never run again, because the lock is held forever (assuming the directory is on a persistent filesystem).

Normally I'd use flock with LOCK_EX instead.

Open a file for reading and writing, creating it if it doesn't exist. Then take the exclusive lock; if that fails (when you pass LOCK_NB), some other process holds the lock.

After you've got the lock, you need to keep the file open.

The advantage of this approach is that if the process dies unexpectedly (for example, it crashes, is killed, or the machine fails), the lock is automatically released.
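A minimal sketch of what that looks like in Perl (the file name lockfile.lock is just an illustration):

use Fcntl qw(:flock O_RDWR O_CREAT);

# Open (or create) the lock file for reading and writing.
sysopen(my $fh, "lockfile.lock", O_RDWR | O_CREAT)
    or die "cannot open lock file: $!";

# Try to take an exclusive, non-blocking lock.
flock($fh, LOCK_EX | LOCK_NB)
    or die "another process holds the lock: $!";

# ... critical section; keep $fh open the whole time ...

close($fh);  # releases the lock (also released if the process dies)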

MarkR
    Yeah, so you can write a pidfile into the lockdir so the next guy can figure out what is up, or have some less complicated crowbar logic. It's only rocket science. – Never Sleep Again Apr 06 '17 at 16:21
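To illustrate the pidfile idea from the comment, here is a minimal sketch (the path lock_dir/pid and the stale-lock policy are just illustrative assumptions):

# Record the holder's PID inside the lock directory.
if (mkdir "lock_dir") {
    open(my $pf, ">", "lock_dir/pid") or die "cannot write pidfile: $!";
    print $pf $$;                       # $$ is this process's PID
    close($pf);
    # ... critical section ...
    unlink "lock_dir/pid";
    rmdir "lock_dir";
} else {
    open(my $pf, "<", "lock_dir/pid") or die "cannot read pidfile: $!";
    chomp(my $pid = <$pf>);
    close($pf);
    # kill 0 sends no signal; it only checks whether the process exists.
    warn "lock is stale: holder $pid is gone\n" unless kill 0, $pid;
}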