I'm using scipy.optimize.minimize to find the minimum of a 4D function, and the result is rather sensitive to the initial guess: if I vary it even a little, the solution changes considerably.
There are many similar questions already on SO (e.g.: 1, 2, 3), but no real answer.
In an old question of mine, one of the developers of the zunzun.com site (apparently no longer online) explained how they managed this:
Zunzun.com uses the Differential Evolution genetic algorithm (DE) to find initial parameter estimates which are then passed to the Levenberg-Marquardt solver in scipy. DE is not actually used as a global optimizer per se, but rather as an "initial parameter guesser".
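If I understand that correctly, the workflow would look roughly like the sketch below, with scipy.optimize.differential_evolution providing the starting point and a local solver polishing it. (I use L-BFGS-B here rather than Levenberg-Marquardt since my objective is a plain scalar function, not a least-squares fit; the objective, bounds, and solver settings are just placeholders.)

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

def func(x):
    # Placeholder 4D objective; my real function goes here.
    return np.sum((x - np.array([1.0, -2.0, 0.5, 3.0]))**2) + np.sum(np.sin(5 * x))

bounds = [(-10, 10)] * 4  # rough box for the DE stage

# Stage 1: DE used only as an "initial parameter guesser".
de_res = differential_evolution(func, bounds, maxiter=50, tol=1e-2, seed=0)

# Stage 2: polish the DE estimate with a local gradient-based solver.
res = minimize(func, x0=de_res.x, method='L-BFGS-B', bounds=bounds)

print(res.x, res.fun)
```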
The closest thing I've found to this approach is this answer, where a for loop is used to call the minimizer many times with random initial guesses. This generates multiple minimized solutions, and finally the best one (the one with the smallest function value) is picked.
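Put as code, that multistart idea would look something like this minimal sketch (same placeholder objective as above; the number of restarts and the sampling box are arbitrary):

```python
import numpy as np
from scipy.optimize import minimize

def func(x):
    # Same placeholder objective as above.
    return np.sum((x - np.array([1.0, -2.0, 0.5, 3.0]))**2) + np.sum(np.sin(5 * x))

rng = np.random.default_rng(0)
best = None
for _ in range(20):                          # number of restarts is arbitrary
    x0 = rng.uniform(-10, 10, size=4)        # random initial guess inside a box
    res = minimize(func, x0, method='Nelder-Mead')
    if best is None or res.fun < best.fun:   # keep the run with the smallest value
        best = res

print(best.x, best.fun)
```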
Is there something like what the zunzun dev described already implemented in Python?