13

I am using Python 2.7.

I want to create a wrapper function around fcntl.flock() that will time out after a set interval:

wrapper_function(timeout):

I've tried calling it on another thread and using thread.join(timeout), but it seems that fcntl.flock() continues blocking:

def GetLock(self, timeout):
    """Returns true if lock is aquired, false if lock is already in use"""
    self.__lock_file = open('proc_lock', 'w')

    def GetLockOrTimeOut():
        print 'ProcessLock: Acquiring Lock'            
        fcntl.flock(self.__lock_file.fileno(), fcntl.LOCK_EX)
        print 'ProcessLock: Lock Acquired'

    thread = threading.Thread(target=GetLockOrTimeOut)
    thread.start()
    thread.join(timeout)  # returns after timeout seconds even if flock() is still blocking

    if thread.isAlive():
        print 'GetLock timed out'
        return False
    else:
        return True

I've looked into solutions for terminating threads; the most popular seems to be sub-classing threading.Thread and adding a feature to raise an exception in the thread. However, I came across a link saying this method will not work with native calls, and I'm fairly sure fcntl.flock() calls down into a native function. Suggestions?

Context: I am using a file lock to create a single-instance application, but I don't want a second instance of the application to sit around and hang until the first instance terminates.

J Cooper

5 Answers

26

Timeouts for system calls are done with signals. Most blocking system calls return with EINTR when a signal happens, so you can use alarm to implement timeouts.

Here's a context manager that works with most system calls, causing IOError to be raised from a blocking system call if it takes too long.

import signal, errno
from contextlib import contextmanager
import fcntl

@contextmanager
def timeout(seconds):
    def timeout_handler(signum, frame):
        pass

    original_handler = signal.signal(signal.SIGALRM, timeout_handler)

    try:
        signal.alarm(seconds)
        yield
    finally:
        signal.alarm(0)
        signal.signal(signal.SIGALRM, original_handler)

with timeout(1):
    f = open("test.lck", "w")
    try:
        fcntl.flock(f.fileno(), fcntl.LOCK_EX)
    except IOError, e:
        if e.errno != errno.EINTR:
            raise  # re-raise unrelated errors with the original traceback
        print "Lock timed out"
Glenn Maynard
  • +1, this is exactly the right way to do it. This is also how the shell utility [`flock(1)`](http://linux.die.net/man/1/flock) works (source code available at [ftp://ftp.kernel.org/pub/linux/utils/util-linux-ng/](ftp://ftp.kernel.org/pub/linux/utils/util-linux-ng/)) – Adam Rosenfield Mar 10 '11 at 04:37
  • agreed, this is a better way. – jcomeau_ictx Mar 10 '11 at 05:07
  • Is there a way to pull this off when the fcntl.flock is not necessarily being invoked by the main thread? – UsAaR33 Jan 26 '12 at 03:34
  • Note that this no longer works after [PEP-475](https://www.python.org/dev/peps/pep-0475/) (PY3.5+): `flock` is now automatically retried if it fails because of a signal. The fix is to raise an exception from `timeout_handler`, for example `raise InterruptedError`. It will propagate from the interrupted call (provided it's the main thread) and you can catch it there. – remram Jul 14 '19 at 02:29
10

I'm sure there are several ways, but how about using a non-blocking lock? After some n attempts, give up and exit?

To use a non-blocking lock, include the fcntl.LOCK_NB flag, as in:

fcntl.flock(self.__lock_file.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
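
For the "after some n attempts, give up" part, a minimal sketch of a retry loop (the function name, attempt count, and delay are illustrative, not from the original answer):

import errno
import fcntl
import time

def acquire_lock_with_retries(path, attempts=10, delay=0.5):
    """Try a non-blocking flock up to `attempts` times, sleeping between tries."""
    lock_file = open(path, 'w')
    for _ in range(attempts):
        try:
            fcntl.flock(lock_file.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
            return lock_file  # keep this object alive; closing it releases the lock
        except IOError as e:
            # EAGAIN (or EACCES on some systems) means another process holds the lock
            if e.errno not in (errno.EAGAIN, errno.EACCES):
                raise
            time.sleep(delay)
    lock_file.close()
    return None  # gave up after `attempts` tries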
jcomeau_ictx
  • that would be perfect, but where is this non-blocking lock you speak of? (I am fairly new to python) please elaborate – J Cooper Mar 10 '11 at 03:54
  • from *pydoc fcntl*: When operation is LOCK_SH or LOCK_EX, it can also be bitwise ORed with LOCK_NB to avoid blocking on lock acquisition – jcomeau_ictx Mar 10 '11 at 03:55
  • it's not really a Python thing, I've been using it for as many as 22 years with C. – jcomeau_ictx Mar 10 '11 at 03:58
  • That's not a great idea. Blocking locks let the kernel be a lot smarter: it can wake up your process immediately when the lock is available to avoid needless delays, avoids network spam on network filesystems, etc. – Glenn Maynard Mar 10 '11 at 04:09
  • @Glenn Maynard: it would depend on the situation. – jcomeau_ictx Mar 10 '11 at 04:12
  • I guess when all else fails, read the documentation eh? Works well in the context I am using it. thanks. – J Cooper Mar 10 '11 at 04:16
  • This is what I ended up with, because signals only work in the main thread. Thanks. – Realfun Sep 11 '13 at 00:56
4

For Python 3.5+, Glenn Maynard's solution no longer works because of PEP-475. This is a modified version:

import signal, errno
from contextlib import contextmanager
import fcntl

@contextmanager
def timeout(seconds):
    def timeout_handler(signum, frame):
        # Now that flock retries automatically when interrupted, we need
        # an exception to stop it
        # This exception will propagate in the main thread; make sure you're calling flock there
        raise InterruptedError

    original_handler = signal.signal(signal.SIGALRM, timeout_handler)

    try:
        signal.alarm(seconds)
        yield
    finally:
        signal.alarm(0)
        signal.signal(signal.SIGALRM, original_handler)

with timeout(1):
    f = open("test.lck", "w")
    try:
        fcntl.flock(f.fileno(), fcntl.LOCK_EX)
    except InterruptedError:
        # Catch the exception raised by the handler
        # If we weren't raising an exception, flock would automatically retry on signals
        print("Lock timed out")
remram
  • Are there risks of SIGALRM affecting other threads, Python housekeeping, libraries, etc.? Without knowing what other threads are doing (or whether they even use SIGALRM for their own purposes), polling with LOCK_NB and tiny sleeps seems safer. Also, there seems to be a chance of race conditions if you really want to know whether the lock was successful or not. – Codemeister Jan 09 '22 at 15:19
  • I personally fork a process just to hold the lock: https://pypi.org/project/fslock/ because I didn't find a better solution when multithreading, as you note. Polling might work but will use more CPU and still be slower. – remram Jan 10 '22 at 00:14
  • I am looking for a fairly efficient mechanism where multiple processes can share common hardware (an A/D converter) several times a second, so extra processes seem very inefficient at those rates. I used time.sleep(.005) in the wait loop, which keeps CPU use < 1% while waiting for short times; faster response to availability might not even happen because the CPUs already have a moderate load. – Codemeister Jan 10 '22 at 22:01
3

I'm a fan of shelling out to flock here, since attempting to do a blocking lock with a timeout requires changes to global state, which makes it harder to reason about your program, especially if threading is involved.

You could fork off a subprocess and implement the alarm as above, or you could just exec flock(1): http://man7.org/linux/man-pages/man1/flock.1.html

import subprocess
def flock_with_timeout(fd, timeout, shared=True):
    # flock(1) operates on the file descriptor it inherits from us and exits
    # non-zero if it cannot take the lock within the timeout
    rc = subprocess.call(['flock', '--shared' if shared else '--exclusive', '--timeout', str(timeout), str(fd)])
    if rc != 0:
        raise Exception('Failed to take lock')

If you have a new enough version of flock you can use -E to specify a different exit code for the case where the command otherwise succeeded but the lock could not be taken within the timeout, so you can tell whether the command failed for some other reason instead.
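
A sketch of that variant, assuming a util-linux flock new enough to support -E (long form --conflict-exit-code); the chosen exit code is arbitrary:

import subprocess

LOCK_BUSY_RC = 42  # arbitrary value, chosen not to collide with flock's other exit codes

def flock_with_timeout_distinct(fd, timeout, shared=True):
    rc = subprocess.call(['flock',
                          '--shared' if shared else '--exclusive',
                          '--timeout', str(timeout),
                          '--conflict-exit-code', str(LOCK_BUSY_RC),
                          str(fd)])
    if rc == LOCK_BUSY_RC:
        raise Exception('Timed out waiting for lock')
    elif rc != 0:
        raise Exception('flock failed for some other reason')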

  • Why outsource work to an external entity when there is a correct, working solution to the problem? Calling an external shell from a Python script requires extra work from the kernel for the fork/exec pair, which gives you a performance penalty in memory, CPU, and I/O usage. Imagine you have a critical section running a dozen times per second on a system under heavy load. Maybe it is not an issue for the OP, but it's good to establish good practices. – ArturFH Nov 30 '16 at 17:29
  • Shelling out has many advantages; it is less code to maintain and the external program has been thoroughly debugged. And presumably the external program takes care of all the corner cases. Don't reinvent the wheel... – presto8 Oct 20 '17 at 15:20
1

As a complement to @Richard Maw's answer above (https://stackoverflow.com/a/32580233/17454091); I don't have enough reputation to post a comment.

In Python 3.2 and newer, for fds to be available in subprocesses one must also provide the pass_fds argument.

The complete solution ends up as:

import subprocess
def flock_with_timeout(fd, timeout, shared=True):
    rc = subprocess.call(['flock',
                          '--shared' if shared else '--exclusive',
                          '--timeout', str(timeout),
                          str(fd)],
                         pass_fds=[fd])
    if rc != 0:
        raise Exception('Failed to take lock')
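
For example, a hypothetical call using the lock-file name from the question:

f = open('proc_lock', 'w')
# Raises if the exclusive lock cannot be taken within 10 seconds
flock_with_timeout(f.fileno(), 10, shared=False)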
HTE