The logic of my multiprocessing program, which tries to handle exceptions raised in worker processes, is essentially the following:
import multiprocessing

class CriticalError(Exception):
    def __init__(self, error_message):
        print error_message
        q.put("exit")

def foo_process():
    while True:
        try:
            line = open("a_file_that_does_not_exist").readline()
        except IOError:
            raise CriticalError("IOError")
        try:
            text = line.split(',')[1]
            print text
        except IndexError:
            print 'no text'

if __name__ == "__main__":
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=foo_process)
    p.start()
    while True:
        if not q.empty():
            msg = q.get()
            if msg == "exit":
                p.terminate()
                exit()
If I don't have the try-except around the file operation, I get
Traceback (most recent call last):
  File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
    self._target(*self._args, **self._kwargs)
  File "foo.py", line 22, in foo_process
    line = open("a_file_that_does_not_exist").readline()
IOError: [Errno 2] No such file or directory: 'a_file_that_does_not_exist'
but the program remains open. Is there a Pythonic way to remove the try-except clause related to IOError, or, more generally, to have every unhandled exception either put the "exit" message into the queue 'q' or terminate the process and exit the program some other way? This would clean up my codebase considerably, since I wouldn't have to catch errors that, in a program without multiprocessing, would kill it automatically. It would also let me use assertions, since an AssertionError would then also exit the program. Whatever the solution, I'd like to still see the traceback; my current solution doesn't show it.
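One way to sketch this (the names `run_and_report` and `failing_worker` are hypothetical, not from the code above) is to wrap the process target in a function that catches any unhandled exception, prints the full traceback inside the child, and only then puts "exit" on the queue for the parent:

```python
import multiprocessing
import traceback

def run_and_report(target, queue):
    # Hypothetical wrapper: run `target`, and if any exception escapes,
    # print the full traceback in the child and tell the parent to exit.
    try:
        target()
    except Exception:
        traceback.print_exc()   # the traceback stays visible
        queue.put("exit")

def failing_worker():
    # Stand-in for foo_process: raises IOError (OSError on Python 3).
    open("a_file_that_does_not_exist").readline()

if __name__ == "__main__":
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=run_and_report,
                                args=(failing_worker, q))
    p.start()
    if q.get() == "exit":       # blocks until the child reports
        p.join()
```

An alternative that avoids the queue entirely: let the exception propagate, and have the parent watch `p.is_alive()` / `p.exitcode`, since a child killed by an unhandled exception ends with a nonzero exit code.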