
I am using the lockfile command on Linux to manage access to a special file.

When my principal script crashes for some reason, I end up with stale locks that prevent any new launch of the principal script and heavily disturb its execution.

Is there a way to store the PID of my lockfile processes so I can track them and do proper clean-up before relaunching my principal script?

Hope I was clear enough...

Debugger

1 Answer


This is a fragile mechanism. I prefer to use real file locks, so that when the process that owns them dies, the O/S releases the lock automatically. That's easy to do in Perl (using the flock function), but I don't know whether it's possible in Bash.

More to the point, I suppose you could use the lock file itself to hold the PID of the script holding the lock, right?

(I don't do shell scripting much... I think the code below is mostly right, but use it at your own risk. There are race conditions.)

while lockfile -! -r 0 lock.file   # -! inverts the exit status: loop while the lock is NOT acquired
do
    # check whether the PID stored in the lock file is still alive
    if ! kill -0 "$(cat lock.file)" 2>/dev/null
    then
        # process doesn't exist anymore; take over the lock
        echo $$ > lock.file
        # do something important
        rm -f lock.file
        break
    fi
    sleep 5
done

Or, how about this:

while true
do
    if [[ ! -e pid.file ]]
    then
        # no lock yet: claim it
        echo $$ > pid.file
        # ### DO SOMETHING IMPORTANT HERE ###
        rm -f pid.file
        break
    elif kill -0 "$(cat pid.file)" 2>/dev/null
    then
        # owner process exists; wait and retry
        sleep 30
    else
        # owner gone, take ownership
        echo $$ > pid.file
        # ### DO SOMETHING IMPORTANT HERE ###
        rm -f pid.file
        break
    fi
done

I like the second one better. It's still far from perfect (lots of race conditions), but it might work if there aren't too many processes fighting for the lock. Also, the sleep 30 should include some randomness if possible, so the waiting processes don't all retry at the same moment.
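For the randomized sleep, something like this would do, using Bash's built-in $RANDOM (the 30-44 second range is just an example):

```shell
# wait 30-44 seconds, so competing processes don't all retry in lockstep
sleep $((30 + RANDOM % 15))
```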

But see here, it looks like you can use flock with some versions of the shell. This would be similar to what I do in Perl, and it would be safer than the alternatives I can think of.
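For reference, a minimal sketch of that flock approach, assuming flock(1) from util-linux is available (the lock file path is made up; any writable path works):

```shell
#!/bin/bash
# Open (or create) the lock file on file descriptor 200.
exec 200>/tmp/myscript.lock

# Try to take an exclusive lock without blocking; drop -n to wait instead.
if ! flock -n 200
then
    echo "another instance holds the lock" >&2
    exit 1
fi

# ### DO SOMETHING IMPORTANT HERE ###

# No cleanup needed: the kernel releases the lock when the process exits,
# even if it crashes.
```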

theglauber
  • Thanks for the answer. The problem I encounter is related to the processes waiting to get the lock, not the one that is executing the code, hence they hang at the first line of your example. Therefore when the script dies they keep waiting for the lock and there is no script left to make them release it... – Debugger Feb 28 '12 at 21:05
  • Meanwhile, I would be interested to know which tool you are using in Perl to lock the files. The OS automatic clean-up would be the best solution, but at the same time it should be NFS-resistant... – Debugger Feb 28 '12 at 21:07
  • OK, I added some more details. I'm afraid I'm not the world's best Bash programmer, so use with caution. I don't think you can completely avoid race conditions using lock files. – theglauber Feb 28 '12 at 21:27
  • Added another attempt, which doesn't use lockfile. – theglauber Feb 29 '12 at 15:35