
Possible Duplicate: [Python: single instance of program](http://stackoverflow.com/questions/380870/python-single-instance-of-program)

I need to prevent a cron job from running concurrent instances when a job takes longer to complete than the launcher interval. I'm trying to use the flock concept to achieve this, but the fcntl module is not behaving the way I expect.

Can anyone tell me why this works to prevent two concurrent instances:

import sys
import time
import fcntl

file_path = '/var/lock/test.py'
file_handle = open(file_path, 'w')

try:
    fcntl.lockf(file_handle, fcntl.LOCK_EX | fcntl.LOCK_NB)
    print 'no other instance is running'
    for i in range(5):
        time.sleep(1)
        print i + 1

except IOError:
    print 'another instance is running exiting now'
    sys.exit(0)

And why this does not work:

import sys
import time
import fcntl

def file_is_locked(file_path):
    file_handle = open(file_path, 'w')
    try:
        fcntl.lockf(file_handle, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return False
    except IOError:
        return True

file_path = '/var/lock/test.py'

if file_is_locked(file_path):
    print 'another instance is running exiting now'
    sys.exit(0)
else:
    print 'no other instance is running'
    for i in range(5):
        time.sleep(1)
        print i + 1
– tponthieux
    Possible dup of http://stackoverflow.com/questions/380870/python-single-instance-of-program, which also spun off a library called [tendo](http://pypi.python.org/pypi/tendo) to deal with all the annoying cross-platform issues. Of course it doesn't answer the "Why does A work but not B?" question, but it does solve the underlying question "How should I do this?" – abarnert Jan 18 '13 at 20:38
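For reference, tendo makes this a one-liner. A minimal sketch, assuming the tendo package is installed (treat the exact API as an assumption; check the library's docs):

from tendo import singleton

# exits immediately if another instance of this script is already running
me = singleton.SingleInstance()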

4 Answers


My humble opinion (although I may be totally wrong) is that file_handle is local to the function (in the second case) and therefore it gets garbage collected once the function returns. Closing a file releases any fcntl locks held through it, so the lock is already gone by the time the caller checks the return value.

The following code seems to work as expected:

#!/usr/bin/env python
#http://stackoverflow.com/questions/14406562/prevent-running-concurrent-instances-of-a-python-script

import sys
import time
import fcntl

# module-level reference keeps the handle (and its lock) alive after the call
file_handle = None

def file_is_locked(file_path):
    global file_handle
    file_handle = open(file_path, 'w')
    try:
        fcntl.lockf(file_handle, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return False
    except IOError:
        return True

file_path = '/var/lock/test.py'

if file_is_locked(file_path):
    print 'another instance is running exiting now'
    sys.exit(0)
else:
    print 'no other instance is running'
    for i in range(5):
        time.sleep(1)
        print i + 1

Notice that the only thing I did was make file_handle a global variable (though I copied the whole code to have a working example).
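To see the failure mechanism directly, here is a minimal sketch (the path /tmp/locktest is illustrative, and the timing relies on CPython's reference counting): once the last reference to the locked file object goes away, the file is closed and the lock is released with it.

import fcntl

f = open('/tmp/locktest', 'w')
fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB)  # lock acquired

del f  # last reference gone: the file is closed and the lock released

# a fresh handle can now take the lock, since nothing holds it anymore
f2 = open('/tmp/locktest', 'w')
fcntl.lockf(f2, fcntl.LOCK_EX | fcntl.LOCK_NB)  # succeeds, no IOError
print 'relocked: the first lock died with its handle'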

– Savir
  • Good catch. So the problem is that the handle gets garbage collected before the first instance finishes running, thus releasing the lock even though it hasn't completed yet. Seems like a good reason to use an object for this. You could then use the same code to keep things from running concurrently across any number of loops, threads, or processes. Then again, you could just try http://stackoverflow.com/questions/2798727/named-semaphores-in-python – Silas Ray Jan 18 '13 at 20:52

As I mentioned in my comment on @BorrajaX's answer, since it looks like you are POSIX-bound anyway, you could try using a native named semaphore.
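A minimal sketch of that idea, assuming the third-party posix_ipc package (the semaphore name is illustrative):

import sys
import posix_ipc

# O_CREAT creates the semaphore if it doesn't exist; initial_value=1 means
# the first acquirer gets it and later ones are locked out
sem = posix_ipc.Semaphore('/test_py_lock', posix_ipc.O_CREAT, initial_value=1)

try:
    sem.acquire(timeout=0)  # non-blocking: raises BusyError if already held
except posix_ipc.BusyError:
    print 'another instance is running exiting now'
    sys.exit(0)

try:
    pass  # ... do the real work here ...
finally:
    # note: unlike a file lock, a named semaphore is NOT released
    # automatically if the process crashes
    sem.release()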

– Silas Ray

You could use the setlock program from D. J. Bernstein's daemontools instead:

http://cr.yp.to/daemontools/setlock.html
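For example, a crontab entry might look like this (paths are illustrative). The -n flag makes setlock give up immediately if the lock is already held, and -x makes it exit quietly in that case:

* * * * * setlock -nx /var/lock/test.lock /usr/bin/python /path/to/test.py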

– Remy Blank

The easiest way would be to create a file /tmp/scriptlock at the start of the script and check whether that file exists before doing any work. Make sure the lock file is removed at the end of processing, though.
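A minimal sketch of that approach. Note that checking for the file and then creating it as two separate steps is racy, so this uses O_EXCL to make the check-and-create atomic:

import os
import sys

lock_path = '/tmp/scriptlock'

try:
    # O_CREAT | O_EXCL fails if the file already exists, so the check
    # and the creation happen as one atomic step
    fd = os.open(lock_path, os.O_WRONLY | os.O_CREAT | os.O_EXCL)
except OSError:
    print 'another instance is running exiting now'
    sys.exit(0)

try:
    pass  # ... do the real work here ...
finally:
    os.close(fd)
    os.remove(lock_path)  # clean up so the next run can start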

– Sander Cox