
I have a logger running on a few thousand processes, and they all write to the same file in append mode. What would be a good way to guarantee that writes are atomic -- that is, that each time a process writes a log message, the entire message is written in one block, and no other process writes to the file at the same time?

My thought was doing something like:

import os
from logging import getLogger

logger          = getLogger()
global_lockfile = '/tmp/loglock'

def atomic_log(msg):
    # Spin until no other process appears to hold the lock.
    # (Still racy: two processes can pass this check at once.)
    while os.path.exists(global_lockfile):
        pass
    # "Take" the lock, write the message, then release the lock
    with open(global_lockfile, 'w'):
        logger.info(msg)
    os.remove(global_lockfile)


def some_function(request):
    atomic_log("Hello")

What would be an actual way to do the above on a POSIX system?

David542
  • Does [this](https://stackoverflow.com/questions/489861/locking-a-file-in-python) answer your question? – DYZ Apr 13 '21 at 23:35
  • @DYZ that's a nice example: https://github.com/dmfrey/FileLock/blob/master/filelock/filelock.py – David542 Apr 13 '21 at 23:37
  • @DYZ but on a second note: is a lockfile the correct way to do atomic logs? Or is there another approach that's preferred? (a `flock` sketch follows these comments) – David542 Apr 13 '21 at 23:37
  • Have a look at this - https://stackoverflow.com/a/13232181/1609219. Basically, if Python opens the file with the `O_APPEND` flag you should get atomic writes (sketched after these comments). – Macattack Apr 13 '21 at 23:56
  • I think the better approach would be to not try to write to the same file, if that's an option (per-process file sketch below). – AKX Apr 14 '21 at 19:08
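
A minimal sketch of the `flock`-based locking the first comments point to (the lock path is a placeholder): `fcntl.flock` takes a POSIX advisory lock that the kernel releases automatically when the file descriptor is closed, even if the process crashes, so a dead holder can't wedge the other processes the way a leftover lock file can.

import fcntl
import logging

logger = logging.getLogger(__name__)

def atomic_log(msg, lockfile='/tmp/loglock'):  # placeholder path
    with open(lockfile, 'w') as lock:
        # Blocks until this process holds an exclusive advisory lock;
        # the kernel releases it when the with-block closes the file.
        fcntl.flock(lock, fcntl.LOCK_EX)
        logger.info(msg)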
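And a sketch of the `O_APPEND` route from Macattack's comment, written as a custom logging handler (the class name and path are made up for illustration). With `O_APPEND`, POSIX makes the seek-to-end and the write a single atomic step, so concurrent appenders can't overwrite each other; whether one large write() can interleave with another depends on the filesystem (NFS notably gives no guarantee), so it helps to emit each record as a single modest write:

import os
import logging

class AtomicAppendHandler(logging.Handler):  # hypothetical name
    """Emit each record as a single write() on an O_APPEND descriptor."""

    def __init__(self, path):
        super().__init__()
        self.fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)

    def emit(self, record):
        # One write() per record: O_APPEND makes the kernel seek to the
        # current end of file and write in one atomic step.
        os.write(self.fd, (self.format(record) + '\n').encode())

    def close(self):
        os.close(self.fd)
        super().close()

logger = logging.getLogger(__name__)
logger.addHandler(AtomicAppendHandler('/tmp/app.log'))  # placeholder path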
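Finally, AKX's suggestion of not sharing the file at all needs no locking: give each process its own log file, for instance keyed on its PID (the path here is a placeholder), and merge the files out of band if a single stream is needed.

import os
import logging

# One file per process: writes never contend, so no lock is needed.
logger = logging.getLogger(__name__)
logger.addHandler(logging.FileHandler(f'/tmp/app-{os.getpid()}.log'))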

0 Answers