
I think this could be more of a stats question than an R question, but I get the error `Error: step factor reduced below 0.001 without reducing pwrss` when trying to fit a model with `nlmer`. My data is here: https://www.dropbox.com/s/cri5n7lewhc8j02/chweight.RData?dl=0

I'm trying to fit the model so that I can predict the weight of chicks based on time, for chicks on diet 1. I did the following:

cw1 <- subset(ChickWeight, Diet == 1)
m1 <- nlmer(weight ~ SSlogis(Time, Asym, xmid, scal) ~ Asym | Chick,
            data = cw1, start = c(Asym = 190, xmid = 730, scal = 350))

Could there be other ways to solve this error? I think it has to do with the `Asym` value, but I don't fully understand what the optimizer is doing, so any brief guidance would help.

halo09876
  • I have the same problem for a subset of my data. Centering, scaling and log-transforming response and covariates did not help me. But you can try. – Jonas Lindeløv Mar 08 '15 at 12:50

1 Answer


I have been asked to improve my answer, so here is my attempt to do so.

This error is usually tripped because your start values aren't adequately close to the "true" values, so the optimizer fails to find any local improvements in fit by moving away from them. You need to provide better starting guesses. This can sometimes be accomplished by algebraically solving the equation at a few points, as described in many places such as this article. Other times, you can plot the data and make educated guesses as to what the parameters might be, provided you know what the parameters "do" within the non-linear function (that is, maybe parameter a represents an asymptote, b is a scalar, c is the mean rate of change, etc.). That's hard for me personally because I have no math background, but I can usually make a reasonable guess.
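For the model in the question specifically, one practical shortcut (also suggested by alexforrence in the comments below) is to let a self-starting function estimate the parameters with a plain `nls` fit first, and then pass those estimates to `nlmer` as starting values. A minimal sketch, using the ChickWeight data that ships with R:

library(lme4) # For nlmer; the ChickWeight data come with base R.
cw1 <- subset(ChickWeight, Diet == 1)
# SSlogis is self-starting, so nls needs no explicit start values here.
m0 <- nls(weight ~ SSlogis(Time, Asym, xmid, scal), data = cw1)
# Reuse the nls estimates as starting values for the mixed model.
m1 <- nlmer(weight ~ SSlogis(Time, Asym, xmid, scal) ~ Asym | Chick,
            data = cw1, start = coef(m0))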

To illustrate more concretely, here is some reproducible code showing that the error in question comes from bad starting guesses.

#Create independent and dependent variables, X and Y, and a grouping variable Z.
set.seed(1) #Set a seed so the simulated data are reproducible.
xs = rep(1:10, times = 10)
ys = 3 + 2*exp(-0.5*xs)
zs = rep(1:10, each = 10)
#Put random noise in X.
for (i in 1:100) {
  xs[i] = rnorm(1, xs[i], 2)
}
df1 = data.frame(xs, ys, zs) #Assemble the data into a data frame.

require(lme4) #Load the package.
#Define our custom function--in this case, a three-parameter exponential model.
funct1 = deriv(~ beta0 + beta1*exp(beta2*xs), namevec = c('beta0', 'beta1', 'beta2'),
    function.arg = c('xs', 'beta0', 'beta1', 'beta2'))
#This will return the exact same error because our starting guesses are way off.
test1 = nlmer(ys ~ funct1(xs, beta0, beta1, beta2) ~ (beta0 | zs), data = df1,
    start = c(beta0 = -50, beta1 = 200, beta2 = 3))
#Our starting guesses are much better now, so nlmer is able to converge this time.
test1 = nlmer(ys ~ funct1(xs, beta0, beta1, beta2) ~ (beta0 | zs), data = df1,
    start = c(beta0 = 3.2, beta1 = 1.8, beta2 = -0.3))
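If the second call converges (the starting guesses are close to the values used to simulate the data, so it should), a quick sanity check is to compare the fixed-effect estimates against the true parameters; assuming the fit above succeeded:

fixef(test1)   #Fixed-effect estimates; should be roughly 3, 2, and -0.5.
summary(test1) #Full summary, including the random-effect variance for beta0.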
Bajcz
  • I think it's a valid answer. One way to get a good set of starting values is to fit it beforehand with something that guesses starting values for you, e.g. `m0 <- nls(weight ~ SSlogis(Time, Asym, xmid, scal), cw1)`. Then the coefficients of `m0` (`coef(m0)`) can be used as starting values. – alexforrence Aug 30 '16 at 19:35
  • This answer turned up in the low quality review queue, presumably because you don't include any code. If you have the motivation, it would be great if you could produce an example where this happens and then show how a better initial guess fixes it. This way, you are far more likely to get more upvotes, and to help the questioner learn something new. – lmo Aug 30 '16 at 22:54
  • Hello @lmo. I have tried to improve the quality of my answer. Thanks! – Bajcz Aug 31 '16 at 13:31