
I've been using the fantastic package texreg to produce high-quality HTML tables from lme4 models. Unfortunately, by default texreg prints confidence intervals, rather than standard errors, under the coefficients for models from lme4 (see page 17 of the JSS paper).

As an example:

library(lme4)
library(texreg)
screenreg(lmer(Reaction ~ Days + (Days|Subject), sleepstudy))

produces

Computing profile confidence intervals ...
Computing confidence intervals at a confidence level of 0.95. Use argument "method = 'boot'" for bootstrapped CIs.

===============================================
                               Model 1         
-----------------------------------------------
(Intercept)                     251.41 *       
                               [237.68; 265.13]
Days                             10.47 *       
                               [  7.36;  13.58]
-----------------------------------------------
AIC                            1755.63         
BIC                            1774.79         
Log Likelihood                 -871.81         
Deviance                       1743.63         
Num. obs.                       180            
Num. groups: Subject             18            
Variance: Subject.(Intercept)   612.09         
Variance: Subject.Days           35.07         
Variance: Residual              654.94         
===============================================
* 0 outside the confidence interval

And I would prefer to see something like this:

Computing profile confidence intervals ...
Computing confidence intervals at a confidence level of 0.95. Use argument "method = 'boot'" for bootstrapped CIs.

===============================================
                               Model 1         
-----------------------------------------------
(Intercept)                     251.41 *       
                                (24.74)
Days                             10.47 *       
                                 (5.92)
-----------------------------------------------
[output truncated for clarity]

Is there a way to override this behavior? Using the ci.force = FALSE option doesn't work, as far as I can tell.
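For example, a call like this still gives me the CI-based table (and as far as I can tell, ci.force = FALSE is already texreg's default):

# still prints confidence intervals rather than standard errors
screenreg(lmer(Reaction ~ Days + (Days|Subject), sleepstudy), ci.force = FALSE)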

I'm sticking with texreg, rather than one of the other packages like stargazer, because texreg allows me to group coefficients into meaningful groups.
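For example, something along these lines (this is my understanding of texreg's groups argument; the group labels here are invented purely to illustrate):

# label blocks of coefficient rows with the `groups` argument
screenreg(lmer(Reaction ~ Days + (Days|Subject), sleepstudy),
          groups = list("Baseline" = 1, "Time effects" = 2))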

Thanks in advance for your help!

(UPDATE: edited to include an example)

Jake Fisher
    Please consider including a *small* [reproducible example](http://stackoverflow.com/questions/5963269/how-to-make-a-great-r-reproducible-example) so we can better understand and more easily answer your question. – Ben Bolker Jul 18 '14 at 19:52
  • Thanks! I put one in. Hopefully that helps to clarify. If not please let me know. Thanks again for your help! – Jake Fisher Jul 18 '14 at 21:20

2 Answers


Using `naive=TRUE` gets close to what you want ...

library(lme4); library(texreg)
fm1 <- lmer(Reaction ~ Days + (Days|Subject), sleepstudy)
screenreg(fm1,naive=TRUE)

## ==========================================
##                                Model 1    
## ------------------------------------------
## (Intercept)                     251.41 ***
##                                  (6.82)   
## Days                             10.47 ***
##                                  (1.55)   
## ------------------------------------------
## [etc.]

I don't know where you got your values of 24.74 and 5.92 from ... ?

sqrt(diag(vcov(fm1)))
## [1] 6.824556 1.545789

cc <- confint(fm1, which = "beta_")  ## profile CIs for the fixed effects
apply(cc, 1, diff)/3.84              ## scale the CI widths to rough implied SEs
## (Intercept)        Days 
##    7.14813     1.61908

The implied standard errors based on scaling the profile confidence intervals are a little bit wider, but not hugely different.

What I don't know how to do easily is to get significance tests/stars based on profile confidence intervals while still getting standard errors in the table (a rough workaround is sketched below). According to the ci.test entry in ?texreg:

  • when CIs are printed, texreg prints a single star if the confidence interval doesn't include zero;
  • when SEs are printed, it prints the usual number of stars based on the size of the p-value.
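A rough workaround (an untested sketch, not something texreg does for you): compute implied SEs from the profile CIs and feed them back in through texreg's override.se and override.pvalues arguments, with Wald-style p-values derived from those implied SEs, so the stars are only approximately "based on the profile CIs":

cc <- confint(fm1, which = "beta_")               ## profile CIs, as above
se <- apply(cc, 1, diff)/(2*qnorm(0.975))         ## implied SEs from CI widths
pv <- 2*pnorm(abs(fixef(fm1)/se), lower.tail = FALSE)  ## Wald-style p-values
screenreg(fm1, naive = TRUE, override.se = se, override.pvalues = pv)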
Ben Bolker
  • That did the trick, thanks! I got the SEs from `summary(fm1)` but clearly that gave me something other than what I thought. The star behavior using the naive = T option was what I wanted; I wanted stars based on the size of the p-value, not based on whether the CI included 0. – Jake Fisher Jul 20 '14 at 20:36
  • FWIW you copied the standard deviations of the random effects from the summary, not the standard errors of the fixed-effect parameters ... – Ben Bolker Jul 20 '14 at 20:51

You can also try setting the `include.ci` parameter to `FALSE`:

model <- lmer(Reaction ~ Days + (Days|Subject), sleepstudy)
texreg(model, include.ci = FALSE)