I'm trying to implement logging in a multiprocessing server. According to the documentation, "logging to a single file from multiple processes is not supported". I wrote a small program to check this claim:
import logging
import multiprocessing
import os

log = logging.getLogger()

def setup_logger():
    formatter = logging.Formatter('%(asctime)s %(name)s %(levelname)s: %(message)s')
    fileHandler = logging.FileHandler('test.log')
    fileHandler.setFormatter(formatter)
    log.setLevel(logging.DEBUG)
    log.addHandler(fileHandler)

def write_log_entries(identifier, start_event):
    # Block until the main process releases all workers at once.
    start_event.wait()
    for i in range(100):
        s = ''.join(str(identifier) for k in range(30))
        log.info('[{}, {}] --- {}'.format(os.getpid(), identifier, s))

if __name__ == '__main__':
    setup_logger()
    procs = []
    start_event = multiprocessing.Event()
    for i in range(100, 300):
        p = multiprocessing.Process(target=write_log_entries, args=(i, start_event))
        procs.append(p)
    for p in procs:
        p.start()
    # Release every worker simultaneously to maximize contention on the file.
    start_event.set()
    for p in procs:
        p.join()
After running the code above I expected to see a complete mess in "test.log", but everything looks fine (apart from the timestamps, which are of course not in sequence).
Can anybody explain why the log entries don't overlap when the file is being written by multiple processes simultaneously? Can log.info() be considered atomic in this case?
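For reference, here is a diagnostic sketch I put together to probe the question with much larger messages (larger than typical I/O buffer sizes, where interleaving seems more plausible). The names `run_probe`, `probe_worker`, `LOG_PATH` and all the sizes are my own inventions, and it assumes a POSIX platform where the `fork` start method is available so children inherit the handler. It doesn't assert that interleaving is impossible; it only checks what can be checked for certain, namely that no bytes are lost or overwritten (the file is opened in append mode and each record is flushed by the handler), and it counts how many lines came out intact:

```python
import logging
import multiprocessing
import os
import re

LOG_PATH = 'atomicity_probe.log'  # hypothetical filename for this experiment

def probe_worker(identifier, start_event, n_lines, payload_len):
    # Runs in a fork()ed child: the root logger's FileHandler is inherited.
    start_event.wait()
    payload = str(identifier) * payload_len  # one repeated digit, easy to validate
    for _ in range(n_lines):
        logging.getLogger().info(payload)

def run_probe(n_procs=4, n_lines=10, payload_len=3000):
    if os.path.exists(LOG_PATH):
        os.remove(LOG_PATH)
    log = logging.getLogger()
    handler = logging.FileHandler(LOG_PATH)  # opens the file in append mode
    handler.setFormatter(logging.Formatter('%(levelname)s %(message)s'))
    log.addHandler(handler)
    log.setLevel(logging.INFO)
    ctx = multiprocessing.get_context('fork')  # children must inherit the handler
    start_event = ctx.Event()
    procs = [ctx.Process(target=probe_worker,
                         args=(i, start_event, n_lines, payload_len))
             for i in range(n_procs)]
    for p in procs:
        p.start()
    start_event.set()  # release every worker at once to maximize contention
    for p in procs:
        p.join()
    log.removeHandler(handler)
    handler.close()
    with open(LOG_PATH) as f:
        content = f.read()
    lines = content.splitlines()
    # A "clean" line is exactly "INFO " plus a run of a single repeated digit.
    clean = sum(1 for line in lines if re.fullmatch(r'INFO (\d)\1*', line))
    # Total characters are conserved even if lines interleave, because appends
    # never overwrite each other and emit() flushes after every record.
    expected_chars = n_procs * n_lines * (len('INFO ') + payload_len + 1)
    return len(content), expected_chars, len(lines), clean

if __name__ == '__main__':
    total, expected, n_lines, clean = run_probe()
    print('chars: {}/{}, intact lines: {}/{}'.format(
        total, expected, clean, n_lines))
```

Comparing `clean` against `n_lines` shows whether any record was split mid-line on the machine in question, while the character count should always match `expected_chars` regardless.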