
Does Python's logging library provide serialised logging for two (or more) separate Python processes logging to the same file? It doesn't seem clear from the docs (which I have read).

If so, what about on completely different machines (where the shared log file would exist on an NFS export accessible by both)?

sjbx

3 Answers


No, it is not supported. From the Python Logging Cookbook:

Although logging is thread-safe, and logging to a single file from multiple threads in a single process is supported, logging to a single file from multiple processes is not supported, because there is no standard way to serialize access to a single file across multiple processes in Python.

The cookbook then suggests using a single socket-server process that handles the logs, with the other processes sending log messages to it. There is a working example of this approach in the section Sending and Receiving logging events across a network; a minimal sketch of the sending side is below.
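
For reference, the sending side of that approach boils down to attaching a logging.handlers.SocketHandler to the root logger in each process; a minimal sketch, assuming the cookbook's receiving server is running on localhost on the default port (the receiver itself is not shown here):

import logging
import logging.handlers

# Each process attaches a SocketHandler; records are pickled and sent over
# TCP to the single receiving process, which is the only writer to the file.
root = logging.getLogger()
root.setLevel(logging.DEBUG)
root.addHandler(logging.handlers.SocketHandler(
    'localhost',                                  # assumed host of the log server
    logging.handlers.DEFAULT_TCP_LOGGING_PORT))   # 9020 by default

logging.info('This record goes to the central logging process.')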

Bakuriu
Hayden Crocker
  • Personally I would be tempted to do exactly what they suggest and implement a very small 'log server' using socket servers and just log to that socket as they suggest in the cookbook. – Hayden Crocker Feb 26 '13 at 18:09
  • Avoid answering with a short sentence and a link; try to include the content that actually answers the question. This makes it easier to search for information on SO, and also keep in mind that links can break in the future. If you want to add information to your answer you should edit it instead of commenting. – Bakuriu Feb 26 '13 at 21:15
  • Thank you for the answer. I suspected this was the case. I guess this would be the same if I had many syslog daemons logging to a single file? Sadly, the requirement is for the logs to exist on an NFS share (accessible from many different machines running the same code). Since our requirements dictate that we cannot impose the constraint that the processes need to be able to communicate (they may be on separate networks), and we cannot modify the NFS share (since it needs to work out of the box with any current NFS export), it looks like I'll have to settle for one log per process on the NFS share. – sjbx Feb 27 '13 at 09:16
  • Sadly, given that any daemon will be a unique process, the same rules would apply. Essentially, access to that single file must be controlled from a single point. Potentially you could find a nice way of merging these separate log files based on a naming convention? Have a look at this: http://stackoverflow.com/questions/6653371/merging-and-sorting-log-files-in-python – Hayden Crocker Feb 27 '13 at 19:04
  • With such luck, rsyslog or logstash could be used as the server part :) – Reishin Jan 16 '17 at 04:53
  • This is basically bullshit, because all POSIX systems have atomic appends to file. On Ubuntu, as I remember, the default atomic size is 4 KB, so if you have small logs you can freely write from many processes to one file in append mode. – Andrey Nikishaev Aug 07 '18 at 10:26

One grotty solution to this problem is to create a logging process which listens on a socket, on a single thread, and just outputs whatever it receives.

The point is to hijack the socket queue as an arbitration mechanism.

#! /usr/bin/env python

import socket
import argparse

p = argparse.ArgumentParser()
p.add_argument("-p", "--port", help="which port to listen on", type=int, default=1339)
p.add_argument("-b", "--backlog", help="accept backlog size", type=int, default=5)
p.add_argument("-s", "--buffersize", help="recv buffer size", type=int, default=1024)
args = p.parse_args()

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(('', args.port))
s.listen(args.backlog)
print("Listening on port", args.port, "backlog size", args.backlog, "buffer size", args.buffersize)
while True:
    client, address = s.accept()
    try:
        # One log entry per connection: read it, print it, close it.
        data = client.recv(args.buffersize)
        print(data.decode(errors="replace"), flush=True)
    finally:
        client.close()

And to test it:

#! /usr/bin/env python

import socket
import argparse

p = argparse.ArgumentParser()
p.add_argument("-p", "--port", help="send port", action='store', default=1339, type=int)
p.add_argument("text", help="text to send")
args = p.parse_args()

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    # One connection per message; the server prints whatever arrives.
    s.connect(('localhost', args.port))
    s.sendall(args.text.encode())
finally:
    s.close()

Then use it like this:

stdbuf -o L ./logger.py -b 10 -s 4096 >>logger.log 2>&1 &

and monitor recent activity with:

tail -f logger.log

Each logging entry from any given process will be emitted atomically. Adding this into the standard logging system shouldn't be too hard. Using sockets means that multiple machines can also target a single log, hosted on a dedicated machine.
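
For instance, hooking the toy server above into the standard logging machinery could look roughly like the following sketch; OneShotSocketHandler is a made-up name, and the one-connection-per-record scheme is simply chosen to match the server's accept/recv loop:

import logging
import os
import socket

class OneShotSocketHandler(logging.Handler):
    # Hypothetical handler: open a connection per record and send the
    # formatted line, matching the toy server above (one recv per accept).
    def __init__(self, host='localhost', port=1339):
        super().__init__()
        self.host = host
        self.port = port

    def emit(self, record):
        try:
            msg = self.format(record)
            with socket.create_connection((self.host, self.port)) as sock:
                sock.sendall(msg.encode())
        except Exception:
            self.handleError(record)

log = logging.getLogger("myapp")
log.addHandler(OneShotSocketHandler())
log.warning("hello from pid %d", os.getpid())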


The simplest way is to use a custom logging handler that passes all records through a queue from the child processes to the main process, and to log them there. This is how logging often works in client applications where you have a main UI thread and worker threads.
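
With the standard library this can be sketched with logging.handlers.QueueHandler in the children and a QueueListener in the parent; the file name and formatter below are just placeholders:

import logging
import logging.handlers
import multiprocessing

def worker(queue):
    # Child process: every record goes into the shared queue.
    root = logging.getLogger()
    root.setLevel(logging.INFO)
    root.addHandler(logging.handlers.QueueHandler(queue))
    logging.info("hello from a child process")

if __name__ == "__main__":
    queue = multiprocessing.Queue()
    # Main process: the QueueListener drains the queue and is the only
    # writer to the log file, so access to the file is serialised.
    file_handler = logging.FileHandler("combined.log")
    file_handler.setFormatter(logging.Formatter("%(process)d %(levelname)s %(message)s"))
    listener = logging.handlers.QueueListener(queue, file_handler)
    listener.start()

    procs = [multiprocessing.Process(target=worker, args=(queue,)) for _ in range(3)]
    for proc in procs:
        proc.start()
    for proc in procs:
        proc.join()
    listener.stop()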

Also, on POSIX systems you can use logging in append mode; writes of up to 4 KB should be atomic.
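
If you rely on that (it is not a safe assumption on NFS, which is what the question asks about), each process can simply keep its own FileHandler on the shared file; FileHandler opens the file in append mode by default. A minimal sketch, with a placeholder file name:

import logging
import os

# Every process appends to the same file. This leans on the claim above that
# small appends are atomic on a local POSIX filesystem; it does not hold on NFS.
handler = logging.FileHandler("shared.log")   # mode='a' by default
handler.setFormatter(logging.Formatter("%(process)d %(asctime)s %(message)s"))
log = logging.getLogger("shared")
log.setLevel(logging.INFO)
log.addHandler(handler)
log.info("one record per line, written from pid %d", os.getpid())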

Andrey Nikishaev