I have been asked to improve my answer, so here is my attempt to do so.
This error is usually tripped because your starting values aren't close enough to the "true" values, so the optimizer fails to find any local improvement in fit by moving away from them. You need to provide better starting guesses. Sometimes you can get them by algebraically solving the model equation at a few data points, as described in many places, such as this article. Other times you can plot the data and make educated guesses, provided you know what each parameter "does" within the non-linear function (e.g., parameter a represents an asymptote, b is a scalar, c is the mean rate of change, and so on). That's hard for me personally because I have no math background, but I can usually still come up with a reasonable guess.
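For an asymptote-plus-decay model like y = a + b*exp(c*x), those parameter roles suggest simple data-driven guesses. Here is a minimal sketch (the names a0, b0, and c0 are just illustrative, not from any package): take the asymptote from where y levels off, the scale from the spread above it, and the rate from the slope of a log-linear fit.

```r
#Rough, data-driven starting guesses for y = a + b*exp(c*x) with c < 0.
x = 1:10
y = 3 + 2*exp(-0.5*x)                 #Toy data with known parameters.
a0 = min(y)                            #Asymptote: roughly where y levels off.
b0 = max(y) - a0                       #Scale: spread of y above the asymptote.
ok = y > a0                            #Drop the point sitting exactly at the minimum.
c0 = unname(coef(lm(log(y[ok] - a0) ~ x[ok]))[2])  #Log-linear slope approximates the rate.
```

These won't equal the true values (here a = 3, b = 2, c = -0.5), but they land in the right neighborhood, which is usually all the optimizer needs.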
To answer the question more directly, here is some reproducible code illustrating that the error in question comes from bad starting guesses.
#Create independent and dependent variables, X and Y, and a grouping variable Z.
set.seed(42) #Set a seed so the random noise is reproducible.
xs = rep(1:10, times = 10)
ys = 3 + 2*exp(-0.5*xs)
zs = rep(1:10, each = 10)
#Put random noise in X (rnorm is vectorized, so no loop is needed).
xs = rnorm(100, xs, 2)
df1 = data.frame(xs, ys, zs) #Assemble data into a data frame.
library(lme4) #Load the package.
#Define our custom function--in this case, a three-parameter exponential model.
funct1 = deriv(~beta0 + beta1*exp(beta2*xs), namevec = c('beta0', 'beta1', 'beta2'),
               function.arg = c('xs', 'beta0', 'beta1', 'beta2'))
#This will return the exact same error, because our starting guesses are way off.
test1 = nlmer(ys ~ funct1(xs, beta0, beta1, beta2) ~ (beta0|zs), data = df1,
              start = c(beta0 = -50, beta1 = 200, beta2 = 3))
#Our starting guesses are much better now (the true values are beta0 = 3,
#beta1 = 2, and beta2 = -0.5), so nlmer is able to converge this time.
test1 = nlmer(ys ~ funct1(xs, beta0, beta1, beta2) ~ (beta0|zs), data = df1,
              start = c(beta0 = 3.2, beta1 = 1.8, beta2 = -0.3))
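One more trick worth mentioning (not in my original answer, so treat it as an optional extra): fit the fixed-effects part of the model with a self-starting nls() first and translate its coefficients into nlmer start values. SSasymp (from base R's stats package) fits y = Asym + (R0 - Asym)*exp(-exp(lrc)*x), which is the same curve as beta0 + beta1*exp(beta2*x) under a different parameterization, and it needs no starting guesses at all.

```r
#Sketch: get start values from a self-starting nls() fit (no guesses needed).
set.seed(1)
x = rep(1:10, times = 5)
y = 3 + 2*exp(-0.5*x) + rnorm(length(x), sd = 0.05) #Toy data with mild noise;
#nls refuses artificial zero-residual data, so some noise is required.
fit0 = nls(y ~ SSasymp(x, Asym, R0, lrc), data = data.frame(x, y))
st = coef(fit0)
#Translate to the beta0/beta1/beta2 parameterization used by funct1 above:
#beta0 = Asym, beta1 = R0 - Asym (value at x = 0 minus asymptote), beta2 = -exp(lrc).
start = c(beta0 = unname(st["Asym"]),
          beta1 = unname(st["R0"] - st["Asym"]),
          beta2 = -exp(unname(st["lrc"])))
```

You could then pass `start` straight into the nlmer() call above instead of eyeballing the values.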