I am having an extremely difficult time implementing a thread-/process-safe solution to acquire a file lock with Python 3 on Linux (I do not care about portable solutions, as the program I am working on makes extensive use of Linux-kernel-specific containerization technologies).
After reading http://apenwarr.ca/log/?m=201012#13, I decided to use fcntl.lockf() to lock a file for process-exclusive access and wrote the following function:
    import contextlib as Contextlib
    import errno as Errno
    import fcntl as Fcntl
    import os as Os

    @Contextlib.contextmanager
    def exclusiveOpen(filename, mode):
        try:
            fileDescriptor = Os.open(filename, Os.O_WRONLY | Os.O_CREAT)
        except OSError as e:
            if not e.errno == Errno.EEXIST:
                raise

        try:
            Fcntl.lockf(fileDescriptor, Fcntl.LOCK_EX)
            fileObject = Os.fdopen(fileDescriptor, mode)
            try:
                yield fileObject
            finally:
                fileObject.flush()
                Os.fdatasync(fileDescriptor)
        finally:
            Os.close(fileDescriptor)
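For context, this is roughly how I intend to call it (the path and payload are just placeholders):

    with exclusiveOpen("/tmp/example.lock", "w") as exampleFile:
        exampleFile.write("data that must not be written concurrently\n")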
Apart from the fact that I am certain it is incorrect (why doesn't it block in Fcntl.lockf(fileDescriptor, Fcntl.LOCK_EX)?), the part that makes me feel most uneasy is where the fileDescriptor is acquired: if the file does not exist, it is created ... but what happens if two processes execute this part simultaneously? Isn't there a chance of a race condition where both processes attempt to create the file? And if so, how could one possibly prevent that? Certainly not with another lock file, because it would have to be created in the same manner. I'm lost. Any help is greatly appreciated.
UPDATE: Posted another approach to the underlying problem. The problem I see with that approach is that a procedure name must not collide with the name of an existing UNIX domain socket (possibly created by another program) - am I right about this?
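For reference, this is roughly what I understand that socket-based approach to look like; it is only a sketch, it relies on Linux's abstract socket namespace (the leading NUL byte), and the procedure name is made up:

    import socket as Socket

    def acquireProcedureLock(procedureName):
        # Binding a name in the abstract namespace ("\0" prefix) is atomic:
        # the second process attempting the same bind gets EADDRINUSE.
        # The "lock" disappears automatically when the socket is closed
        # or the process exits.
        lockSocket = Socket.socket(Socket.AF_UNIX, Socket.SOCK_DGRAM)
        lockSocket.bind("\0" + procedureName)
        return lockSocket  # keep a reference, or the lock is released on garbage collection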