
While code fragment X is executing, fragment Y must not execute, and while Y is executing, X must not execute. Instead, whichever of X and Y arrives while the other is running should pause until the other finishes.

However, any number of processes or threads are allowed to execute X concurrently.

What about executing more than one Y in parallel? A second Y should either also pause until the first instance of Y finishes or die with the error "Cannot execute more than one Y in parallel".

Can this be implemented with Unix advisory file locks? I think I should write the number of currently running X instances into a file (similar to reference counting). Or maybe I need two files: one storing the number of processes currently executing X, and the other serving as a lock for the first? Now let's try to construct the algorithm.

Note that I write in Python.
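
To make the counting idea concrete, here is roughly the bookkeeping I have in mind (the file names are placeholders, and this sketch deliberately leaves out the part where Y waits for the count to drop to zero):

from fcntl import flock, LOCK_EX, LOCK_UN
from contextlib import contextmanager

COUNTER_FILE = "/var/run/x-count"       # stores the number of X currently running
GUARD_FILE = "/var/run/x-count.lock"    # the second file: a lock protecting the counter

@contextmanager
def counter_guard():
    # Hold an exclusive lock on the guard file while the counter is read or written.
    with open(GUARD_FILE, 'w') as guard:
        flock(guard, LOCK_EX)
        try:
            yield
        finally:
            flock(guard, LOCK_UN)

def change_x_count(delta):
    # Atomically add delta: +1 when an X starts, -1 when it finishes.
    with counter_guard():
        try:
            with open(COUNTER_FILE) as f:
                count = int(f.read() or "0")
        except FileNotFoundError:
            count = 0
        count += delta
        with open(COUNTER_FILE, 'w') as f:
            f.write(str(count))
        return count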

  • Note that in my real problem X is code that modifies a DB and Y is the DB recovery procedure, needed if the data has somehow become inconsistent – porton Sep 28 '16 at 05:01
  • If there's a Y request pending, are X requests allowed to proceed, or do they get held up until the Y is processed? Are there other sharing/starvation issues to worry about (too many Y requests preventing X requests, for example)? This is all pretty much standard 'readers and writers' mutual exclusion or locking stuff — basic concurrency theory. It can be done with advisory locking as long as the processes are cooperating. – Jonathan Leffler Sep 28 '16 at 05:12
  • @JonathanLeffler What do you mean by "pending"? If Y is executed, all X must wait. Do I need one or two advisory locks? – porton Sep 28 '16 at 05:13
  • Suppose there are 3 X's running when a Y arrives. The Y has to wait until the current X's are done. Can a newly arrived X hold up Y? Say 1 of the original 3 X's finishes, but a new X comes along; can it run immediately, or must it wait until the Y has had a turn? Can enough Y's arrive to prevent X's from having a fair crack at the resource (the converse of the X's blocking Y's problem)? How serious this is depends in part on the relative frequency of X and Y requests; it also depends in part on how quick X and Y requests are (both absolutely and relative to each other). – Jonathan Leffler Sep 28 '16 at 05:16
  • @JonathanLeffler The only really required rule is that X cannot co-exist with Y (and it is also desirable to prevent two Y running in parallel). "Can a newly arrived X hold up Y?" Yes, it can (but isn't required to do so). I'd prefer that Y runs as soon as possible, but that is not necessary. Y should not arrive often (we start it manually, and only in the case of an error). X is run sometimes (more often than Y) by our customers. X is not all that quick (I imagine it may take around 0.1 sec), and Y is even heavier (up to a few seconds, probably) – porton Sep 28 '16 at 05:25
  • Oh, I had overlooked that flock(2) supports both exclusive and shared locks. This is a solution – porton Sep 28 '16 at 05:44
  • `fcntl()` locking does too: _`F_SETLK` — Set or clear a file segment lock according to the lock description pointed to by the third argument, `arg`, taken as a pointer to a `struct flock`. F_SETLK is used to establish shared (or read) locks (F_RDLCK) or exclusive (or write) locks, (F_WRLCK), as well as remove either type of lock (F_UNLCK). If a shared or exclusive lock cannot be set, `fcntl` returns immediately with EAGAIN._ – Jonathan Leffler Sep 28 '16 at 05:50
  • In my current code I use `flock()` not `fcntl()` – porton Sep 28 '16 at 06:14
  • Have you got a solution? If so, I suggest deleting this question — or adding a self-answer. I won't be offended if you delete it — my comments are primarily counter-questions to make you think about the issues you face (scheduling and resource starvation are issues you'll need to address, probably). But the basic locking mechanisms should support you (using `lockf()` is harder than using `flock()` or `fcntl()` — though not by much). – Jonathan Leffler Sep 28 '16 at 06:26
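
For comparison, here is a minimal sketch of the `fcntl()`-style locking discussed in the last few comments, using Python's `fcntl.lockf` wrapper (the lock-file path is a placeholder):

from fcntl import lockf, LOCK_SH, LOCK_EX, LOCK_UN

# LOCK_SH would wrap X and LOCK_EX would wrap Y, exactly as with flock().
with open("/var/run/groups-modify.lock", 'w') as f:
    lockf(f, LOCK_SH)      # shared (read) lock; blocks while another process holds LOCK_EX
    try:
        pass               # ... run X here ...
    finally:
        lockf(f, LOCK_UN)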

1 Answer


This can be solved with a single lock file, because advisory file locks come in two kinds: exclusive and shared. Y takes an exclusive lock and X takes a shared lock.

Here is the actual code:

from fcntl import flock, LOCK_EX, LOCK_SH, LOCK_UN
import logging


class SequentialTask:
    def __init__(self, lock_file):
        self.lock_file = lock_file

    def sequential(self, lock_mode):
        """Decorator factory: run the wrapped function under an advisory
        flock() lock of the given mode (LOCK_SH for X, LOCK_EX for Y)."""
        def sequential_decorator(func):
            #@wraps(func) # TODO: doesn't work
            def func_wrapper(*args, **kwargs):
                with open(self.lock_file, 'w') as file:
                    logger = logging.getLogger(__name__)
                    logger.debug("Locking a sequential request...")
                    flock(file, lock_mode)  # blocks until no conflicting lock is held
                    try:
                        return func(*args, **kwargs)
                    finally:
                        logger.debug("Unlocking a sequential request...")
                        # unlink(self.lock_file)  # Don't do: http://stackoverflow.com/q/17708885/856090
                        flock(file, LOCK_UN)
                        # unlink(self.lock_file)  # Don't do: open but unlocked file may be deleted.
            return func_wrapper
        return sequential_decorator

Then

task = SequentialTask(VAR_DIR+"/groups-modify.lock")

and just add `@task.sequential(LOCK_SH)` to the function implementing X and `@task.sequential(LOCK_EX)` to the function implementing Y.
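
Putting it together, a hypothetical usage sketch (the function names below are invented for this sketch; `VAR_DIR` is whatever directory holds your run-time state):

from fcntl import LOCK_SH, LOCK_EX

task = SequentialTask(VAR_DIR + "/groups-modify.lock")

@task.sequential(LOCK_SH)
def modify_db(group_id, data):
    ...   # X: any number of these may run concurrently

@task.sequential(LOCK_EX)
def recover_db():
    ...   # Y: waits until every X (and any other Y) has released the lock

If a second Y should die with the error from the question instead of waiting, passing `LOCK_EX | LOCK_NB` to `flock()` makes the lock attempt raise `OSError` (EWOULDBLOCK) rather than block; note that it would then also fail while any X holds the shared lock.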
