I'm trying to use scipy.optimize functions to find a global minimum of a complicated function with several arguments. scipy.optimize.minimize seems to do the job best of all, specifically the 'Nelder-Mead' method. However, it tends to wander outside the arguments' domain (it assigns negative values to arguments that can only be positive) and thus returns an error in such cases. Is there a way to restrict the arguments' bounds within the scipy.optimize.minimize function itself? Or maybe within other scipy.optimize functions?
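To make the failure mode concrete, here is a minimal sketch with a toy objective (not my actual function) that is only defined for non-negative input. With np.sqrt the out-of-domain probes show up as nan values and a warning rather than a hard error, but the behavior is the same: the simplex steps below zero.

```python
import numpy as np
from scipy.optimize import minimize

probed = []  # record every point the simplex evaluates

def f(x):
    probed.append(x[0])
    return np.sqrt(x[0])  # only defined for x >= 0; yields nan below

minimize(f, x0=[1.0], method='Nelder-Mead')

# The minimum lies at the domain boundary x = 0, so the simplex
# overshoots it and evaluates f at negative points.
print(min(probed))
```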
I've found the following advice:
When the parameters fall out of the admissible range, return a wildly huge number (far from the data to be fitted). This will (hopefully) penalize this choice of parameters so much that curve_fit will settle on some other admissible set of parameters as optimal.
This was given in a previous answer, but the procedure will take a lot of computational time in my case.
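For reference, the penalty workaround from that advice can be sketched like this (the objective here is a hypothetical stand-in for my actual function, with its minimum at x = [1, 1] inside the positive domain):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical objective, defined only for strictly positive parameters
def objective(x):
    return np.log(x[0]) ** 2 + np.log(x[1]) ** 2

# Wrapper that returns a huge value outside the admissible range,
# so the simplex is pushed back into the domain
def penalized(x):
    if np.any(x <= 0):
        return 1e10
    return objective(x)

res = minimize(penalized, x0=[3.0, 0.5], method='Nelder-Mead')
print(res.x)  # should end up near [1, 1]
```

The drawback, as noted above, is the extra function evaluations spent bouncing off the penalty region, which is what makes this expensive for my function.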