I am running a Python script which uses `scipy.optimize.differential_evolution` to find optimal parameters for given data samples. The samples are processed sequentially, with one call to `differential_evolution()` per sample. The script runs fine in serial mode, but when I make use of the parallel computing options implemented in the package by calling it via:
```python
res = scipy.optimize.differential_evolution(
    min_fun,
    bounds=bnds,
    updating='deferred',
    workers=120,
)
```
it throws the following error after `res` has been evaluated a few times:
File "[...]/miniconda3/envs/remote_fit_environment/lib/python3.8/multiprocessing/popen_fork.py", line 69, in _launch
child_r, parent_w = os.pipe()
OSError: [Errno 24] Too many open files
If I allocate fewer CPUs, e.g. `workers=20`, it takes longer until the error occurs, i.e. more calls to `differential_evolution()` succeed before it does.
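For context, the overall structure is a plain sequential loop over the samples. Here is a minimal self-contained sketch of it; the data, bounds, and objective below are toy placeholders, not my real ones:

```python
import numpy as np
import scipy.optimize

def min_fun(x, sample):
    # Toy objective (placeholder): quadratic misfit between parameters x
    # and one data sample. Module-level, so worker processes can pickle it.
    return np.sum((x - sample) ** 2)

if __name__ == '__main__':
    samples = [np.random.rand(2) for _ in range(100)]  # stand-in for my data
    bnds = [(-5.0, 5.0)] * 2

    results = []
    for sample in samples:
        res = scipy.optimize.differential_evolution(
            min_fun,
            bounds=bnds,
            args=(sample,),       # pass the current sample to the objective
            updating='deferred',
            workers=120,          # a fresh worker pool is set up per call
        )
        results.append(res)
```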
I am aware that I could raise the limit for open files (1024 by default on that machine), e.g. from within Python via the stdlib `resource` module:
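```python
import resource

# Raise the soft limit on open file descriptors to the hard limit.
# This only postpones the problem; the descriptors still accumulate.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
```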
But this seems strange to me. Since the error originates in the `multiprocessing` module and is traced back to `differential_evolution`, I am wondering whether something in the parallelisation implementation within `scipy.optimize.differential_evolution` might be wrong or "not clean" (although it is much more likely that I am missing something, as I am completely new to this whole parallel/multiprocessing topic).
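One idea I had while reading the SciPy docs: `workers` also accepts a map-like callable such as `multiprocessing.Pool.map`, so a single pool could be created up front and reused for every call, instead of each call spinning up its own. A rough sketch, reusing the placeholder names from above:

```python
import multiprocessing
import scipy.optimize

# One pool for all calls; its pipes/descriptors are created only once.
with multiprocessing.Pool(processes=120) as pool:
    results = []
    for sample in samples:
        res = scipy.optimize.differential_evolution(
            min_fun,
            bounds=bnds,
            args=(sample,),
            updating='deferred',
            workers=pool.map,  # map-like callable instead of an int
        )
        results.append(res)
```

Would that be the intended way to use the `workers` argument here, and could it avoid the descriptor build-up?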
Has anyone had similar experiences, or ideas on how to solve this?