What are `p[0]`, `p[1]`, `p[2]`?
The `scipy.optimize` functions typically return an array of parameters `p`. For example, given a linear equation `y = m*x + b`, `p` includes the intercept and successive coefficients (or weights) of the linear equation, i.e. `y = p[0] + p[1]*x`.

Thus, in the latter form, `p[0]` and `p[1]` pertain to the intercept and slope of the line, respectively. Of course, more parameters (`p[2]`, `p[3]`, ...) can be optimized as well for higher-order polynomials. The OP uses an exponential function, whose parameters can be rewritten as follows:
    import numpy as np

    def fitfuncvx(p, x):
        b0, b1, b2 = p              # unpack parameters: p[0], p[1], p[2]
        return b2 + b0 * np.exp(-x / b1)
We see the parameters in `p` are explicitly unpacked into separate weights `b0`, `b1`, `b2`, which directly correspond to `p[0]`, `p[1]`, `p[2]`, respectively.
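To make that mapping concrete, here is a small sketch (with made-up parameter values) that evaluates the function above:

```python
import numpy as np

def fitfuncvx(p, x):
    b0, b1, b2 = p
    return b2 + b0 * np.exp(-x / b1)

# hypothetical parameter array: p[0] -> b0, p[1] -> b1, p[2] -> b2
p = [2.0, 0.5, 1.0]
x = np.array([0.0, 1.0, 2.0])

# at x = 0 the model reduces to b2 + b0*exp(0) = 1.0 + 2.0 = 3.0
y = fitfuncvx(p, x)
print(y)
```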
Details: How do optimizers work?
The first value returned by the `scipy.optimize.leastsq` function is an array of optimized (fitted) parameters, computed by starting from your initial guess and iteratively minimizing the residuals. A residual is the distance between the predicted response (or `y`-hat value) and the true response (`y`). When `leastsq` is called with `full_output=True`, it also returns a covariance matrix (`cov_x`), from which one can estimate the error in the fitted parameters.
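As a sketch of that workflow, assuming synthetic, noise-free data generated from made-up "true" parameters, one can fit the OP's model and inspect both the fitted parameters and the covariance matrix:

```python
import numpy as np
from scipy.optimize import leastsq

def fitfuncvx(p, x):
    b0, b1, b2 = p
    return b2 + b0 * np.exp(-x / b1)

# synthetic data from known (made-up) parameters
true_p = [2.0, 1.5, 0.5]
x = np.linspace(0.1, 10, 50)
y = fitfuncvx(true_p, x)

def residuals(p):
    # residual: predicted response minus true response
    return fitfuncvx(p, x) - y

p0 = [1.0, 1.0, 0.0]   # initial guess
p_opt, cov_x, infodict, mesg, ier = leastsq(residuals, p0, full_output=True)
print(p_opt)           # close to true_p for noise-free data
```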
For reference, I include the first three arguments of the `leastsq` signature:

    scipy.optimize.leastsq(func, x0, args=(), ...)

- `func` is the objective function you wish to optimize; for `leastsq`, it should return the array of residuals whose sum of squares is minimized
- `x0` is the initial guess for the parameters
- `args` are any additional variables required by the objective function (if any)
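Tying these three arguments together, here is a minimal sketch, assuming the OP's exponential model and made-up synthetic data, where `args` passes the data through to the residual function:

```python
import numpy as np
from scipy.optimize import leastsq

def fitfuncvx(p, x):
    b0, b1, b2 = p
    return b2 + b0 * np.exp(-x / b1)

def residuals(p, x, y):          # func: returns the residuals to minimize
    return fitfuncvx(p, x) - y

true_p = [2.0, 1.5, 0.5]         # made-up parameters for synthetic data
x = np.linspace(0.1, 10, 50)
y = fitfuncvx(true_p, x)

p0 = [1.0, 1.0, 0.0]             # x0: initial guess
result = leastsq(residuals, p0, args=(x, y))   # args: extra variables for func
p_opt = result[0]                # first returned value: the fitted parameters
print(p_opt)
```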