I am writing a module that spawns processes using Python's multiprocessing. Part of the module creates worker processes and returns their results through a result queue. Logging for the module is done with a TCP SocketHandler together with a StreamHandler.
The problem is that logging works only intermittently. The multiprocessing itself behaves as expected, but the log records appear inconsistently on the logging console. Sometimes all or most of the records show up; on other runs, records gradually drop out until only a fraction of them appear. Running the modules separately logs as expected, with nothing missing.
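For context, each logger in the module is configured with both handlers, roughly like this (a minimal sketch; the port, levels, and format string are assumptions, since the real setup lives in a separate tcp_log_socket helper module):

import logging
import logging.handlers

def make_logger(name):
    # Send pickled LogRecords to a TCP listener and also echo them locally.
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    socket_handler = logging.handlers.SocketHandler(
        'localhost', logging.handlers.DEFAULT_TCP_LOGGING_PORT)
    stream_handler = logging.StreamHandler()
    stream_handler.setFormatter(logging.Formatter('%(processName)-10s %(message)s'))
    logger.addHandler(socket_handler)
    logger.addHandler(stream_handler)
    return logger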
The code I am currently testing with is below:
import multiprocessing
import sys

import tcp_log_socket  # local helper module that builds the socket-based logger

# Check the process list (a collections.deque) for dead processes.
def clean_process_list(self):
    logger.info("Checking for dead or orphaned processes.")
    if self.processes:
        for _ in range(len(self.processes)):
            process = self.processes.pop()
            if not process.is_alive():
                logger.info("{} is unresponsive; terminating process.".format(process.name))
                process.terminate()
            else:
                self.processes.appendleft(process)

# Worker method which puts results from a target method into a queue, if any
# exist. Self-terminates on completion.
def worker(self, args):
    worker_name = multiprocessing.current_process().name
    # Reinitialize the logger inside the worker for troubleshooting.
    logging_socket = tcp_log_socket.local_logging_socket('test.json', worker_name)
    logger = logging_socket.logger
    logger.info("Running process {}; waiting for results.".format(worker_name))
    results = self.target_method(*args)
    results_queue = self.process_results
    logger.info("Ran target method, storing results and name of finished process.")
    results_queue.put([results, worker_name])
    logger.info("Process {} completed, exiting.".format(worker_name))
    sys.exit(0)

# Creates and uses a process to run a job using the assigned target method.
def use_process(self, args):
    self.clean_process_list()
    self.process_results = multiprocessing.Queue()
    process = multiprocessing.Process(target=self.worker, args=(args,))
    logger.info("Created process; process name is {}".format(process.name))
    self.processes.append(process)
    process.start()
    logger.info("Process {} started.".format(process.name))
The expected log output versus two examples of unexpected output is below:
Ideal Output:
Checking for dead or orphaned processes.
Process-1 is unresponsive; terminating process.
Created process; process name is Process-2
Process-2 started.
Running process Process-2; waiting for results.
Ran target method, storing results and name of finished process.
Sum is: 5
Process Process-2 completed, exiting.
Unexpected Output:
Checking for dead or orphaned processes.
Created process; process name is Process-2
Running process Process-2; waiting for results.
Sum is: 5
Or even worse:
Sum is: 5
Beyond the above, I have already done basic configuration on the logger (i.e., setting the log level), and the logging server is written to keep reading from the socket until it has received a complete record, even if an individual read returns a short message.
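Concretely, the server's receive loop follows the cookbook's LogRecordStreamHandler pattern, which keeps reading from the socket until a complete pickled record has arrived (trimmed from the cookbook example):

import logging
import pickle
import socketserver
import struct

class LogRecordStreamHandler(socketserver.StreamRequestHandler):
    def handle(self):
        while True:
            # Each record arrives as a 4-byte big-endian length prefix
            # followed by a pickled LogRecord attribute dict.
            chunk = self.connection.recv(4)
            if len(chunk) < 4:
                break
            slen = struct.unpack('>L', chunk)[0]
            chunk = self.connection.recv(slen)
            while len(chunk) < slen:
                # Short read: keep going until the full record is in.
                chunk = chunk + self.connection.recv(slen - len(chunk))
            record = logging.makeLogRecord(pickle.loads(chunk))
            logging.getLogger(record.name).handle(record)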
Does anyone recognize these symptoms or know the root cause of this problem?
EDIT: I should mention that the logging code is almost identical to the multiple-process example in the Python Logging Cookbook: https://docs.python.org/3/howto/logging-cookbook.html