As the title says, I am using the Differential Evolution algorithm as implemented in the Python mystic package for a global optimisation problem over O(10) parameters, with bounds and constraints.
I am using the simple interface, diffev:

    import mystic.solvers as my  # mystic's one-line solver interface

    result = my.diffev(func, x0, npop=10*len(list(bnds)), bounds=bnds,
                       ftol=1e-11, gtol=gtol, maxiter=1024**3, maxfun=1024**3,
                       constraints=constraint_eq, penalty=penalty,
                       full_output=True, itermon=mon, scale=scale)
I experimented with running the SAME optimisation several times: for a given scaling factor of the differential evolution algorithm, I run the optimisation problem 10 times.
Result? I get different answers in almost all of the runs!
I experimented with scaling factors of 0.7, 0.75, 0.8, and 0.85 (as suggested on the mystic page), and all show roughly the same bad behaviour.
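For concreteness, the repeated-run experiment is essentially the loop below (a sketch: func, x0, bnds, constraint_eq, penalty and gtol are the same objects used in the call above; the best_params dictionary is just bookkeeping added here for illustration):

    import mystic.solvers as my
    from mystic.monitors import Monitor

    best_params = {s: [] for s in (0.7, 0.75, 0.8, 0.85)}
    for scale in best_params:
        for _ in range(10):                       # 10 identical repetitions per scale factor
            mon = Monitor()
            res = my.diffev(func, x0, npop=10*len(list(bnds)), bounds=bnds,
                            ftol=1e-11, gtol=gtol, maxiter=1024**3, maxfun=1024**3,
                            constraints=constraint_eq, penalty=penalty,
                            full_output=True, itermon=mon, scale=scale)
            best_params[scale].append(res[0])     # res[0] is the best parameter vector found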
Here is an example: the x-axis shows the parameters, the y-axis their values, and the labels indicate the repetition (run). Ideally you would want to see only one line.
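The plot is produced from those runs with something like the following (purely illustrative matplotlib code, assuming the best_params dictionary from the sketch above; the plotting itself is not the point, only that each repetition is one line):

    import matplotlib.pyplot as plt

    for i, p in enumerate(best_params[0.7]):      # one line per repetition
        plt.plot(range(len(p)), p, marker='o', label='run %d' % i)
    plt.xlabel('parameter index')
    plt.ylabel('parameter value')
    plt.legend()
    plt.show()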
I run with gtol = 3500, so each run should be quite long (as I understand it, the solver has to go 3500 generations without improvement before terminating). I use npop = 10 * (number of parameters), ftol = 1e-11, and the other important diffev arguments are left at their defaults.
Does anyone have suggestions for tuning differential evolution with mystic? Is there a way to reduce this variance in the results? I know it is a stochastic algorithm, but I did not expect it to give different results when running with a gtol of 3500. My understanding was also that this algorithm does not get stuck in local minima, but I might be wrong.
p.s.
This is not directly relevant to the question, but it gives some context on why this is important for me.
What I need to do for my work is to minimise a function, under the conditions above, for many input datasets: for each data configuration I optimise over the O(10) parameters, and the configuration whose optimised parameters give the overall minimum is the 'chosen' one.
Now, if the optimiser is not stable, it might by chance report the wrong data configuration as the optimal one, since I run over hundreds of them.
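To make that concrete, the selection step is roughly the following (a sketch only: configurations and optimise_one are hypothetical names standing for my data configurations and for one diffev run as above):

    # hypothetical sketch of the selection over data configurations
    best_cfg, best_val = None, float('inf')
    for cfg in configurations:                 # hundreds of data configurations
        params, value = optimise_one(cfg)      # one diffev optimisation as shown above
        if value < best_val:
            best_cfg, best_val = cfg, value
    # if diffev is noisy, 'value' fluctuates between runs, so best_cfg may be picked by chance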