Using Python 3.6, I have a problem like so: (1) there is a joblib.Parallel loop over embarrassingly parallel jobs, and (2) the jobs themselves are fairly time-intensive native C++ objects that occasionally segfault and whose code I cannot modify.
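For context, the setup looks roughly like this (run_native_job and tasks are hypothetical stand-ins for the real code):

from joblib import Parallel, delayed

tasks = [...]  # the real inputs

def run_native_job(task):
    # Calls into the C++ extension; this is the part that occasionally
    # segfaults and takes the whole worker process down with it.
    ...

results = Parallel(n_jobs=4)(delayed(run_native_job)(t) for t in tasks)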
To guard against the segfaults, I tried wrapping each job inside a multiprocessing Process. Unfortunately, with that approach Python itself raises an AssertionError: daemonic processes are not allowed to have children.
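For reference, the wrapping attempt looked roughly like this (names are illustrative); start() trips the assertion because the joblib worker that executes guarded_job is itself a daemonic process, at least with the multiprocessing backend:

import multiprocessing

def guarded_job(task):
    # Run the crash-prone call in a child process so that a segfault
    # only kills the child, not the joblib worker itself.
    p = multiprocessing.Process(target=run_native_job, args=(task,))
    p.start()  # AssertionError: daemonic processes are not allowed to have children
    p.join()
    return p.exitcode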
So I took the solution posted here and tried inheriting from Process: https://stackoverflow.com/a/8963618/614684
That didn't work either, so I came up with the following solution, which DOES work:
import multiprocessing
import multiprocessing.process

class NoDaemonProcess(multiprocessing.Process):
    def __init__(self, group=None, target=None, name=None, args=(), kwargs={},
                 *, daemon=None):
        super(NoDaemonProcess, self).__init__(group, target, name, args, kwargs,
                                              daemon=daemon)
        # Drop the 'daemon' flag from the current process's config so this
        # child is no longer recorded as the child of a daemonic process.
        if 'daemon' in multiprocessing.process._current_process._config:
            del multiprocessing.process._current_process._config['daemon']
        self._config = multiprocessing.process._current_process._config.copy()

    # make the 'daemon' attribute always return False
    def _get_daemon(self):
        return False

    def _set_daemon(self, value):
        pass

    daemon = property(_get_daemon, _set_daemon)
Basically, I modify the global state of the multiprocessing package to delete the fact that the current process is a daemon.
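For what it's worth, the jobs then use NoDaemonProcess in place of Process, roughly like this (run_native_job and tasks are the same placeholders as above):

from joblib import Parallel, delayed

def guarded_job(task):
    # NoDaemonProcess can be started from inside a joblib worker because
    # it never reports itself as daemonic.
    p = NoDaemonProcess(target=run_native_job, args=(task,))
    p.start()
    p.join()
    # A negative exit code (e.g. -11 for SIGSEGV) means the C++ code crashed;
    # the loop keeps going instead of dying with the job.
    return p.exitcode

results = Parallel(n_jobs=4)(delayed(guarded_job)(t) for t in tasks)

The loop only gets exit codes back this way; anything the job needs to return would still have to go through a multiprocessing Queue, Pipe, or a file.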
Is there a better way to do this? I would appreciate any help in making this more robust and reliable.