
I am trying to optimize 8 parameters for my model, but I am struggling with the slow speed of the SciPy shgo optimizer.

from scipy.optimize import shgo                     # objective_function and bnds are defined elsewhere

opt = shgo( objective_function,                     # the per-loop callee - its [ns/call] cost dominates
            bounds           = bnds,                # static box bounds for the 8 parameters
            iters            = 2,
            minimizer_kwargs = { 'method': 'SLSQP', # ~ O(n^3) in time
                                 'ftol':    1e-3    # FTOL 0.001
                                 }
            )

How can I parallelize the SciPy shgo optimizer?

  • Probably not. Most of the scipy optimizers use some sort of compiled code (standard libraries), but they call the `objective_function` (and bounds) many times - sequentially, not in any sort of parallel fashion. Pay attention to the performance of your own functions. – hpaulj May 22 '23 at 03:56
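A quick sanity-check of that advice is to time the callee in isolation before blaming the optimiser. A minimal sketch, assuming a placeholder objective_function and starting point x0 (substitute your real ones):

import timeit
import numpy as np

x0 = np.zeros( 8 )                                     # placeholder feasible point, illustration only

def objective_function( x ):                           # placeholder callee, illustration only
    return np.sum( ( x - 0.1 )**2 )

t = timeit.timeit( lambda: objective_function( x0 ), number = 10_000 )
print( f"{t / 10_000 * 1e9:.0f} ns per call" )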

2 Answers


Q : "How can I parallelize Scipy-shgo optimizer?"

Generations of SciPy developers have done their best to design as many optimisation tricks as possible into the internals of this FORTRAN-originated library; one would have to be a very advanced architect indeed to improve on an already highly tuned product. That does not mean it cannot be done, but it is a warning that one would have to be very good at it.

What to do with this?

a)
we can always check whether the most expensive part could be made to run way faster ( here that is the per-loop callee - the passed objective_function() )

If skills, RAM and some smart, CPU-friendly ( registers + cache-lines ) vectorisation tricks permit, this can help in every case, sometimes a lot, as the sketch below illustrates.
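As an illustration of the kind of speed-up meant here, a minimal sketch with a hypothetical least-squares callee (np.polyval stands in for your real model): the loop-free variant does a single C-level pass over contiguous arrays instead of one interpreter round-trip per data point.

import numpy as np

rng    = np.random.default_rng( 0 )
data_t = np.linspace( 0.0, 1.0, 10_000 )
data_y = rng.normal( size = data_t.size )              # placeholder data, illustration only

def objective_function_slow( x ):
    # pure-Python loop: one interpreter round-trip per data point
    total = 0.0
    for t, y in zip( data_t, data_y ):
        total += ( np.polyval( x, t ) - y )**2
    return total

def objective_function_fast( x ):
    # vectorised: a single C-level pass over contiguous arrays
    residuals = np.polyval( x, data_t ) - data_y
    return residuals @ residuals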

Tweaking the default value of eps and other method-specific hyper-parameters might also help in smooth-model cases, if you still insist on keeping Sequential Least SQuares Programming (SLSQP) as the solver's driving method, as sketched below.
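A minimal sketch of such tuning, reusing objective_function and bnds from the question and passing the SLSQP-specific knobs through the nested options dict that scipy.optimize.minimize expects (the eps value shown is an assumption, to be tuned per model):

from scipy.optimize import shgo

opt = shgo( objective_function,
            bounds           = bnds,
            iters            = 2,
            minimizer_kwargs = { 'method':  'SLSQP',
                                 'options': { 'ftol': 1e-3,  # looser stop criterion: fewer SLSQP iterations
                                              'eps':  1e-6   # larger FD step: cheaper numerical gradients
                                              }
                                 }
            )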

b)
we can opt for a less expensive minimiser method; the currently chosen SLSQP is both expensive and (IIRC) unable to use sparse-matrix representations of the data (should those appear in your use-case). With ~ O(n^2) [SPACE]-domain scaling and ~ O(n^3) [TIME]-domain scaling for n dimensions, it becomes impractical for optimisation jobs beyond a scale of a few thousand parameters.
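A minimal sketch of such a swap, assuming the problem carries only the box bounds and no nonlinear constraints (for which SLSQP or COBYLA would still be needed):

from scipy.optimize import shgo

opt = shgo( objective_function,
            bounds           = bnds,
            iters            = 2,
            minimizer_kwargs = { 'method': 'L-BFGS-B' }  # limited-memory: avoids dense O(n^2) Hessian storage
            )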

c)
we may analyse and try, if the problem and other conditions permit, to run the global optimisation as many split cases at a lower dimensionality of the problem's parameter-vector space, find the sub-space optima there, and then re-run the most promising candidates received from those sub-spaces as starters for a full-scale, all-dimension global optimum search, hopefully finishing faster than the same search would evolve without those many (faster) sub-space hints. Here only our time, resources and imagination are the limit; a sketch follows below.
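A minimal sketch of that split-and-polish idea, with hypothetical placeholder bounds and objective (substitute your own), splitting the 8-D space into two 4-D sub-problems and polishing the stitched candidate in full dimension:

import numpy as np
from scipy.optimize import shgo, minimize

bnds    = [ ( -1.0, 1.0 ) ] * 8                        # placeholder bounds, illustration only
nominal = np.array( [ 0.5 * ( lo + hi ) for lo, hi in bnds ] )

def objective_function( x ):                           # placeholder objective, illustration only
    return np.sum( ( x - 0.1 )**2 )

def sub_objective( x_sub, idx ):
    # evaluate the full objective with only the idx-block of parameters varied
    x        = nominal.copy()
    x[ idx ] = x_sub
    return objective_function( x )

x0 = nominal.copy()
for idx in ( slice( 0, 4 ), slice( 4, 8 ) ):           # two 4-D sub-space searches
    sub       = shgo( sub_objective,
                      bounds = bnds[ idx ],
                      args   = ( idx, ),
                      iters  = 2 )
    x0[ idx ] = sub.x                                  # keep each sub-space optimum as a hint

# full-scale, all-dimension polish, started from the stitched sub-space hints
res = minimize( objective_function, x0, bounds = bnds, method = 'SLSQP' )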

user3666197

Use the workers keyword for parallelisation.
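A minimal sketch, assuming SciPy >= 1.11 (where shgo gained the workers keyword; following the differential_evolution convention, -1 means use all available CPU cores):

from scipy.optimize import shgo

opt = shgo( objective_function,
            bounds           = bnds,
            iters            = 2,
            minimizer_kwargs = { 'method': 'SLSQP' },
            workers          = -1                      # parallelise sampling-point evaluations over all cores
            )

Note that process-based workers require objective_function to be picklable, i.e. defined at a module's top level.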

Andrew Nelson
  • Thanks for the answer! Looks like workers will be included in version 1.11.0, which has not yet been released. Would there be any way to use it before getting released? – user21629075 May 23 '23 at 07:02
  • Yes, if you install the nightly wheel. https://anaconda.org/scipy-wheels-nightly/scipy – Andrew Nelson May 23 '23 at 16:14