Q : "How can I parallelize Scipy-shgo optimizer?"
Generations of SciPy developers have done their best to design as many optimisation tricks as possible into the internals of this FORTRAN-originated library; one would have to be a very advanced architect indeed to improve on an already highly tuned product. That does not say it cannot be done, yet it warns that one would have to be very good at trying to do that.
What to do with this?
a)
we can always check whether the most expensive parts could be made to run way faster ( here that is the per-loop-run callee fun, i.e. the passed objective_function() ).
If skills, RAM and some smart, CPU-friendly ( registers + cache-lines ) vectorisation tricks permit, this could help in every case, sometimes a lot ( a minimal sketch follows below ).
Tweaking the default value of eps and other method-specific hyper-parameters might also help in smooth-model cases, if one insists on keeping Sequential Least SQuares Programming ( SLSQP ) as the solver's driving method.
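A minimal sketch of both tricks, assuming a trivially simple, purely illustrative objective_function() ( your real callee will differ ): first a python-level per-loop version, next its numpy-vectorised re-cast, plus an eps / ftol override handed down to the SLSQP local minimiser via shgo's minimizer_kwargs:

```python
import numpy as np
from scipy.optimize import shgo

# A hypothetical quadratic objective, used here only for illustration.
# Naive, python-level, per-element looping version:
def objective_function_slow( x ):
    s = 0.0
    for xi in x:
        s += ( xi - 0.5 ) ** 2
    return s

# The same objective, re-cast into numpy-vectorised code, which lets the
# CPU stream over contiguous cache-lines instead of interpreting a loop:
def objective_function( x ):
    x = np.asarray( x )
    return np.sum( ( x - 0.5 ) ** 2 )

bounds = [ ( -5.0, 5.0 ) ] * 4

# eps ( the finite-difference step of the SLSQP local minimiser ) and other
# method-specific options are passed down through minimizer_kwargs:
result = shgo( objective_function,
               bounds,
               minimizer_kwargs = { 'method':  'SLSQP',
                                    'options': { 'eps':  1.0E-7,
                                                 'ftol': 1.0E-9 } } )
print( result.x, result.fun )
```

The callee gets invoked over and over throughout the shgo run, so even modest per-call savings multiply.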
b)
we can opt for a less expensive local-minimiser method. The default SLSQP is both expensive and (IIRC) cannot use sparse-matrix representations of data ( should those appear in your use-case ). With ~ O( n^2 ) [SPACE]-domain scaling and ~ O( n^3 ) [TIME]-domain scaling for n dimensions, it becomes less practical for optimisation jobs at a scale of more than a few thousand parameters.
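A minimal sketch of swapping the driving method, assuming a smooth, bounds-only problem ( L-BFGS-B cannot digest shgo's general constraints= the way SLSQP can ), re-using the illustrative objective from above:

```python
import numpy as np
from scipy.optimize import shgo

def objective_function( x ):                       # hypothetical objective
    x = np.asarray( x )
    return np.sum( ( x - 0.5 ) ** 2 )

bounds = [ ( -5.0, 5.0 ) ] * 100                   # more dimensions than SLSQP likes

# L-BFGS-B keeps only a few correction pairs ( maxcor ), so its memory
# footprint grows ~ O( m * n ) instead of SLSQP's dense ~ O( n^2 ):
result = shgo( objective_function,
               bounds,
               sampling_method  = 'sobol',
               minimizer_kwargs = { 'method':  'L-BFGS-B',
                                    'options': { 'maxcor': 10 } } )
print( result.fun )
```

The 'sobol' sampling method is used here because the default simplicial triangulation itself becomes impractical in high dimensions.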
c)
we may, if the problem and other conditions permit, analyse and try to run the global optimisation as many split cases, each at a lower dimensionality of the aProblemParametersVectorSPACE[...], to find sub-space optima, and then augment / re-run the most promising solutions received from the sub-spaces as starters for a full-scale, all-dimension global-optimum hunt, hopefully arriving there faster than letting the same evolve without those many ( faster ) sub-space hints. Here only our time, resources & imagination are the limit.
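A minimal sketch of that divide-and-hint idea, under a loud assumption that the problem's dimensions interact weakly enough for sub-space optima to be useful hints ( the block split and all helper names here are purely illustrative ): each sub-space is solved by shgo with the remaining coordinates frozen, the partial winners are stitched into one full-dimension vector, and a final all-dimension polish starts from that composed hint. The independent sub-space runs are also the natural place to spend parallel workers, e.g. via a multiprocessing.Pool:

```python
import numpy as np
from scipy.optimize import shgo, minimize

def objective_function( x ):                       # hypothetical objective
    x = np.asarray( x )
    return np.sum( ( x - 0.5 ) ** 2 )

N_DIM  = 12
bounds = [ ( -5.0, 5.0 ) ] * N_DIM
blocks = [ range( 0, 4 ), range( 4, 8 ), range( 8, 12 ) ]    # an illustrative split
x_hint = np.zeros( N_DIM )                         # frozen values for the "other" dims

def solve_sub_space( block ):
    idx = list( block )
    def sub_objective( x_sub ):                    # other coordinates stay frozen
        x_full      = x_hint.copy()
        x_full[idx] = x_sub
        return objective_function( x_full )
    res = shgo( sub_objective, [ bounds[i] for i in idx ] )
    return idx, res.x

# ( these mutually independent runs could be farmed out to a multiprocessing.Pool )
for idx, x_sub_opt in map( solve_sub_space, blocks ):
    x_hint[idx] = x_sub_opt                        # stitch the partial winners together

# final, all-dimension polish, started from the composed sub-space hint:
final = minimize( objective_function, x_hint, bounds = bounds, method = 'SLSQP' )
print( final.x, final.fun )
```

For strongly coupled dimensions the stitched hint may land far from the true global optimum, so treat it as a warm start, not a guarantee.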