
I am trying to build a negative binomial regression model in R by using the optim() function over the log-likelihood function to estimate the parameters. The log-likelihood function is: Negative Binomial Log-Likelihood Function
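For reference, the log-likelihood in the linked image is presumably the standard NB2 form (this reconstruction assumes the common parameterisation with Var(y_i) = mu_i + alpha*mu_i^2; the image itself is not shown):

```latex
\ell(\alpha, \beta) = \sum_{i=1}^{n} \Big[ \ln\Gamma\!\big(y_i + \tfrac{1}{\alpha}\big) - \ln\Gamma\!\big(\tfrac{1}{\alpha}\big) - \ln\Gamma(y_i + 1) + y_i \ln(\alpha \mu_i) - \big(y_i + \tfrac{1}{\alpha}\big) \ln(1 + \alpha \mu_i) \Big], \qquad \mu_i = \exp(x_i^\top \beta)
```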

Here, alpha is the dispersion parameter and beta is the vector of regression coefficients. I am using the results from the glm.nb() function as initial values for optim() and also to validate my results.

When I pass both alpha and beta as parameters to be estimated in the optim() function and try to calculate the standard errors, I get negative variances. By contrast, when I pass alpha as a constant and estimate only beta, my results match those from glm.nb() exactly and the standard errors are also satisfactory.
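Since the question does not include the code, here is a minimal sketch of the workflow described above, on simulated data (the data, variable names, and optimiser settings are assumptions, not the poster's). The NB2 negative log-likelihood is minimised jointly over c(log(alpha), beta), with alpha estimated on the log scale so it stays positive, and standard errors are taken from the inverse of the Hessian that optim() returns:

```r
library(MASS)  # for glm.nb() and rnegbin()

## Simulated data standing in for the (unshown) original data.
set.seed(1)
n  <- 500
x  <- rnorm(n)
mu <- exp(0.5 + 0.8 * x)
y  <- rnegbin(n, mu = mu, theta = 2)  # MASS's theta = 1/alpha
X  <- cbind(1, x)

## NB2 negative log-likelihood; par = c(log(alpha), beta).
negll <- function(par, y, X) {
  alpha <- exp(par[1])
  beta  <- par[-1]
  mu    <- exp(X %*% beta)
  -sum(lgamma(y + 1/alpha) - lgamma(1/alpha) - lgamma(y + 1) +
       y * log(alpha * mu) - (y + 1/alpha) * log(1 + alpha * mu))
}

## Starting values taken from glm.nb(), as in the question.
fit0  <- glm.nb(y ~ x)
start <- c(log(1 / fit0$theta), coef(fit0))

fit <- optim(start, negll, y = y, X = X, method = "BFGS", hessian = TRUE)

## Because negll is *minimised*, the returned Hessian is the observed
## information, so its inverse gives the variances directly; if the
## log-likelihood were maximised instead (fnscale = -1), the Hessian
## would have to be negated before inverting.
se <- sqrt(diag(solve(fit$hessian)))
cbind(estimate = fit$par, se = se)
```

Note the sign convention: negative variances typically appear when the Hessian of a maximised objective is inverted without flipping its sign, which may be related to what is happening here.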

I am curious about the reason behind this. Is it necessary to estimate the dispersion parameter first and then estimate the regression coefficients keeping alpha fixed?

Results from glm.nb() function: glm.nb()

The dispersion parameter alpha is represented by 'theta'. Results from optim() function: optim(), alpha and beta variable. Variances from the above method: Variances

Results from optim() function with alpha fixed: optim(), alpha fixed

  • This is not really a specific programming question that's appropriate for Stack Overflow. If you need advice about fitting statistical models, you should ask for help at [stats.se] instead. You are likely to get better help there. – MrFlick Mar 15 '23 at 13:55

0 Answers