26

On a Debian-based OS (Ubuntu, Debian Squeeze), I'm using Python's (2.7, 3.2) fcntl module to lock a file. As I understand from what I've read, fcntl.flock locks a file in such a way that an exception will be thrown if another client wants to lock the same file.

I built a little example, which I would expect to throw an exception, since I first lock the file and then, immediately after, try to lock it again:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import fcntl
fcntl.flock(open('/tmp/locktest', 'r'), fcntl.LOCK_EX)
try:
    fcntl.flock(open('/tmp/locktest', 'r'), fcntl.LOCK_EX | fcntl.LOCK_NB)
except IOError:
    print("can't immediately write-lock the file ($!), blocking ...")
else:
    print("No error")

But the example just prints "No error".

If I split this code into two clients running at the same time (one locking and then waiting, the other trying to lock after the first lock is already active), I get the same behavior: no effect at all.

What's the explanation for this behavior?

EDIT:

With the changes nightcracker requested, this version also prints "No error", although I would not expect that:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import fcntl
import time
fcntl.flock(open('/tmp/locktest', 'w'), fcntl.LOCK_EX | fcntl.LOCK_NB)
try:
    fcntl.flock(open('/tmp/locktest', 'w'), fcntl.LOCK_EX | fcntl.LOCK_NB)
except IOError:
    print("can't immediately write-lock the file ($!), blocking ...")
else:
    print("No error")
Wolkenarchitekt

7 Answers

18

Old post, but if anyone else finds it, I get this behaviour:

>>> fcntl.flock(open('test.flock', 'w'), fcntl.LOCK_EX)
>>> fcntl.flock(open('test.flock', 'w'), fcntl.LOCK_EX | fcntl.LOCK_NB)
# That didn't throw an exception

>>> f = open('test.flock', 'w')
>>> fcntl.flock(f, fcntl.LOCK_EX)
>>> fcntl.flock(open('test.flock', 'w'), fcntl.LOCK_EX | fcntl.LOCK_NB)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
IOError: [Errno 35] Resource temporarily unavailable
>>> f.close()
>>> fcntl.flock(open('test.flock', 'w'), fcntl.LOCK_EX | fcntl.LOCK_NB)
# No exception

It looks like in the first case, the file is closed right after the first line, presumably because the file object is no longer referenced and gets garbage collected. Closing the file releases the lock.

philh
    Please note that since python 3.3 operations in fcntl module raise OSError instead of IOError – Qlimax Jul 31 '20 at 12:33
16

I had the same problem... I solved it by holding the opened file in a separate variable:

Won't work:

fcntl.lockf(open('/tmp/locktest', 'w'), fcntl.LOCK_EX | fcntl.LOCK_NB)

Works:

lockfile = open('/tmp/locktest', 'w')
fcntl.lockf(lockfile, fcntl.LOCK_EX | fcntl.LOCK_NB)

I think the first doesn't work because the opened file object is garbage collected, the file gets closed, and the lock is released.
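
Building on that, here is a minimal sketch (the locked() helper is my own, not part of fcntl) of a context manager that keeps the file object referenced for exactly as long as the lock is needed:

import fcntl
from contextlib import contextmanager

@contextmanager
def locked(path):
    # The file object stays referenced in this frame, so it cannot be
    # garbage collected (and the lock silently dropped) inside the block.
    f = open(path, 'w')
    try:
        fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
        yield f
    finally:
        f.close()  # closing the file releases the lock

with locked('/tmp/locktest'):
    print("lock held inside this block")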

hgdeoro
    Please note that you are using `lockf` whereas OP has used `flock` in the original post. These two are very different implementations! Bad names, difficult to catch ;) – Jatin Kumar Feb 12 '15 at 04:01
12

Got it. The error in my script is that I create a new file descriptor on each call:

fcntl.flock(open('/tmp/locktest', 'r'), fcntl.LOCK_EX | fcntl.LOCK_NB)
(...)
fcntl.flock(open('/tmp/locktest', 'r'), fcntl.LOCK_EX | fcntl.LOCK_NB)

Instead, I have to assign the file object to a variable and then try to lock:

f = open('/tmp/locktest', 'r')
fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
(...)
fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)

Then I also get the exception I wanted to see: IOError: [Errno 11] Resource temporarily unavailable. Now I have to think about the cases in which it makes sense to use fcntl at all.
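
For the cross-process case, a minimal sketch (path and sleep time are arbitrary) is to run the same script in two terminals; while the first instance sleeps, the second should hit the exception:

#!/usr/bin/env python
import fcntl
import time

f = open('/tmp/locktest', 'w')   # keep the reference for the lock's lifetime
try:
    fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
except IOError:                  # OSError since Python 3.3
    print("another process already holds the lock")
else:
    print("lock acquired, sleeping for 30 seconds")
    time.sleep(30)               # the lock is released when the process exits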

Wolkenarchitekt
    To be clear, the error isn't that you're creating a *new* file descriptor on each call, but that the *previous* file descriptor has been garbage collected (and the previous lock goes with it). If you were to save both of those file descriptors to different variables, the script would work. – Dustin Boswell Aug 29 '13 at 00:18
    This answer is misleading. The comment above by @Dustin is correct. – Sam Watkins Jan 21 '16 at 05:51
6

There are two catches. According to the documentation:

  1. When operation is LOCK_SH or LOCK_EX, it can also be bitwise ORed with LOCK_NB to avoid blocking on lock acquisition. If LOCK_NB is used and the lock cannot be acquired, an IOError will be raised and the exception will have an errno attribute set to EACCES or EAGAIN (depending on the operating system; for portability, check for both values).

    You forgot to set LOCK_NB.

  2. On at least some systems, LOCK_EX can only be used if the file descriptor refers to a file opened for writing.

    You have a file opened for reading, which might not support LOCK_EX on your system (a sketch combining both points follows this list).
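
A minimal sketch combining both points (the path is just an example): open the file for writing and check the errno when the non-blocking lock fails:

import errno
import fcntl

f = open('/tmp/locktest', 'w')                     # opened for writing (point 2)
try:
    fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)  # LOCK_NB set (point 1)
except IOError as e:                               # OSError since Python 3.3
    if e.errno in (errno.EACCES, errno.EAGAIN):
        print("the lock is held by another process")
    else:
        raise
else:
    print("lock acquired")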

orlp
3

You could refer to this post for more details on different locking schemes.
As for your second question, use fcntl to get a lock across different processes (or use lockf instead, for simplicity). On Linux, lockf is just a wrapper around fcntl; both are associated with a (pid, inode) pair.
1. Use fcntl.fcntl to provide a file lock across processes.

import os
import fcntl
import struct


fd = open('/etc/mtab', 'r')
ppid = os.getpid()
print('parent pid: %d' % ppid)
lockdata = struct.pack('hhllh', fcntl.F_RDLCK, 0, 0, 0, ppid)
res = fcntl.fcntl(fd.fileno(), fcntl.F_SETLK, lockdata)
print('put read lock in parent process: %s' % str(struct.unpack('hhllh', res)))
if os.fork():
    os.wait()
    lockdata = struct.pack('hhllh', fcntl.F_UNLCK, 0, 0, 0, ppid)
    res = fcntl.fcntl(fd.fileno(), fcntl.F_SETLK, lockdata)
    print('release lock: %s' % str(struct.unpack('hhllh', res)))
else:
    cpid = os.getpid()
    print('child pid: %d' % cpid)
    lockdata = struct.pack('hhllh', fcntl.F_WRLCK, 0, 0, 0, cpid)
    try:
        fcntl.fcntl(fd.fileno(), fcntl.F_SETLK, lockdata)
    except OSError:
        res = fcntl.fcntl(fd.fileno(), fcntl.F_GETLK, lockdata)
        print('fail to get lock: %s' % str(struct.unpack('hhllh', res)))
    else:
        print('succeeded in getting lock')

2. Use fcntl.lockf.

import os
import fcntl

# Use a scratch file here; opening /etc/mtab with 'w' would truncate it.
fd = open('/tmp/locktest', 'w')
fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
if os.fork():
    os.wait()
    fcntl.lockf(fd, fcntl.LOCK_UN)
else:
    try:
        fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except IOError as e:
        print('failed to get lock')
    else:
        print('succeeded in getting lock')
lyu.l
0

You need to pass in the file descriptor (obtainable by calling the fileno() method of the file object). The code below throws an IOError when the same code is run in a separate interpreter.

>>> import fcntl
>>> thefile = open('/tmp/testfile')
>>> fd = thefile.fileno()
>>> fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
Vatine
  • This shouldn't be necessary. From the [documentation](http://docs.python.org/library/fcntl.html#fcntl.flock): _Perform the lock operation op on file descriptor fd (__file objects providing a fileno() method are accepted as well__)._ – orlp Mar 28 '12 at 12:54
  • applying this does not change the behavior; I'm still getting "No error", contrary to what I would expect – Wolkenarchitekt Mar 28 '12 at 12:55
    @ifischer: Odd, I pasted the code above into two python interpreters on an Ubuntu machine and the first one completed, the second threw an exception. – Vatine Mar 28 '12 at 13:30
-2

Try:

global f
f = open('/tmp/locktest', 'r')

When the file is closed, the lock will vanish.
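
For illustration, a minimal sketch (the path is just an example) that keeps the file object alive for the duration of the critical section and unlocks explicitly before closing:

import fcntl

f = open('/tmp/locktest', 'r')
fcntl.flock(f, fcntl.LOCK_EX)
try:
    pass  # critical section goes here
finally:
    fcntl.flock(f, fcntl.LOCK_UN)  # optional; close() drops the lock anyway
    f.close()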

rao xp