
Here's my data:

subject     arm treat   bline   change
'subject1'  'L' N   6.3597  4.9281
'subject1'  'R' T   10.3499 1.8915
'subject3'  'L' N   12.4108 -0.9008
'subject3'  'R' T   13.2422 -0.7357
'subject4'  'L' T   8.7383  2.756
'subject4'  'R' N   10.8257 -0.531
'subject5'  'L' N   7.1766  2.0536
'subject5'  'R' T   8.1369  1.9841
'subject6'  'L' T   10.3978  9.0743
'subject6'  'R' N   11.3184  3.381
'subject8'  'L' T   10.7251  2.9658
'subject8'  'R' N   10.9818  2.9908
'subject9'  'L' T   7.3745   2.9143
'subject9'  'R' N   9.4863  -3.0847
'subject10' 'L' T   11.8132  -2.1629
'subject10' 'R' N   9.5287   0.1401
'subject11' 'L' T   8.2977   6.2219
'subject11' 'R' N   9.3691   0.7408
'subject12' 'L' T   12.6003  -0.7645
'subject12' 'R' N   11.7329  0.0342
'subject13' 'L' N   9.4918  2.0716
'subject13' 'R' T   9.6205  1.5705
'subject14' 'L' T   9.3945  4.6176
'subject14' 'R' N   11.0176 1.445
'subject16' 'L' T   8.0221  1.4751
'subject16' 'R' N   9.8307  -2.3697
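
(For reproducibility, a sketch of one way to read the posted data into the change1 data frame used in the model call below; only the first rows are shown here, the remaining rows follow the same pattern.)

change1 <- read.table(header = TRUE, text = "
subject  arm treat   bline  change
subject1 L   N       6.3597  4.9281
subject1 R   T      10.3499  1.8915
subject3 L   N      12.4108 -0.9008
subject3 R   T      13.2422 -0.7357
")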

When I fit a mixed model with treat and arm as factors:

m <- lmer(change ~ bline + treat + arm + (1|subject), data=change1)
ls_means(m, which = NULL, level=0.95, ddf="Kenward-Roger")

the ls_means() call returns an empty table. Can anyone tell me what is going wrong?


2 Answers


I too see empty results:

> ls_means(m, which = NULL, level=0.95, ddf="Kenward-Roger")
Least Squares Means table:

     Estimate Std. Error df t value lower upper Pr(>|t|)

  Confidence level: 95%
  Degrees of freedom method: Kenward-Roger 

However, the emmeans package works fine. You can use emmeans() or lsmeans() -- the latter just re-labels the emmeans() results. "Estimated marginal means" is a more generally-appropriate term.

> library(emmeans)
> lsmeans(m, "treat")
 treat lsmean   SE df lower.CL upper.CL
 N      0.996 0.72 15   -0.539     2.53
 T      2.290 0.72 15    0.755     3.82

Results are averaged over the levels of: arm 
Degrees-of-freedom method: kenward-roger 
Confidence level used: 0.95 

> lsmeans(m, "arm")
 arm lsmean    SE   df lower.CL upper.CL
 L     1.97 0.737 15.6    0.403     3.53
 R     1.32 0.737 15.6   -0.248     2.88

Results are averaged over the levels of: treat 
Degrees-of-freedom method: kenward-roger 
Confidence level used: 0.95 

I suspect that lmerTest::ls_means() does not support predictors of class "character". If you change treat and arm to factors, it may work.
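
A sketch of that fix (assuming your data frame is called change1, as in your lmer() call):

## convert the character predictors to factors, then refit and re-run ls_means()
change1 <- transform(change1, treat = factor(treat), arm = factor(arm))
m <- lmer(change ~ bline + treat + arm + (1|subject), data = change1)
ls_means(m, which = NULL, level = 0.95, ddf = "Kenward-Roger")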

Russ Lenth
  • I confirmed my guess. `change1 = transform(change1, treat=factor(treat), arm=factor(arm)); m <- lmer(change ~ bline + treat + arm + (1|subject), data=change1)`. Then the `ls_means()` call you have works. You should report this to the package maintainer – Russ Lenth Apr 27 '21 at 01:33
  • Thanks for figuring this out! I will report this finding. – Statbong Apr 27 '21 at 02:48
  • I had a similar problem, and after converting the predictors to factors it works. I suggest simply `change1$arm <- as.factor(change1$arm)` – Alessio Aug 29 '23 at 10:40

We're going to need more information. Here's a reproducible example that seems just fine:

set.seed(101)
library(lme4)
library(lmerTest)
dd <- expand.grid(subject=factor(1:40), arm=c("L","R"))
## replicate N/T in random order for each subject
dd$treat <- c(replicate(40,sample(c("N","T"))))
dd$bline <- rnorm(nrow(dd))
dd$change <- simulate(~bline+treat+arm+(1|subject),
                      newdata=dd,
                      newparams=list(beta=rep(1,4),
                                     theta=1,
                                     sigma=1))[[1]]

m <- lmer(change ~ bline + treat + arm + (1|subject), data=dd)
ls_means(m, which = NULL, level=0.95, ddf="Kenward-Roger")
##  Least Squares Means table:
##  
##      Estimate Std. Error   df t value   lower   upper  Pr(>|t|)    
## armL  1.37494    0.22716 55.6  6.0527 0.91981 1.83007 1.275e-07 ***
## armR  2.54956    0.22716 55.6 11.2235 2.09443 3.00469 6.490e-16 ***

My best guess at this point is that you are having some problem with model fitting. lmerTest can sometimes be opaque and swallow warnings or error messages. Did you get any warnings that you neglected to tell us about? If you re-run the model with lme4::lmer(...) (i.e. using the basic version from lme4, not the augmented version in lmerTest), do you see any warnings?
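
For example, a sketch of that check (the object name m0 is mine, not from your code):

m0 <- lme4::lmer(change ~ bline + treat + arm + (1|subject), data = change1)  # warnings, if any, print here
summary(m0)                  # inspect the fit
packageVersion("lmerTest")   # version info is worth including in a bug report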

Ben Bolker
  • No, I didn't get any warnings with lmer(...) or with lme4::lmer(...). – Statbong Apr 24 '21 at 00:29
  • OK, then we have to have a [mcve]. https://stackoverflow.com/questions/5963269/how-to-make-a-great-r-reproducible-example .... does the example I gave work for you? What is `packageVersion("lmerTest")` ? Can we see the `summary()` output of your model? – Ben Bolker Apr 24 '21 at 00:34
  • Your example works fine. packageVersion("lmerTest") is 3.1.3 – Statbong Apr 24 '21 at 00:58
  • REML criterion at convergence: 11
    Scaled residuals:
         Min       1Q   Median       3Q      Max
    -1.76533 -0.42232 -0.03193  0.47177  1.14020
    Random effects:
     Groups   Name        Variance Std.Dev.
     subject  (Intercept) 0.07807  0.2794
     Residual             0.02462  0.1569
    Number of obs: 26, groups: subject, 13
    – Statbong Apr 24 '21 at 01:05
  • Fixed effects:
                Estimate Std. Error       df t value Pr(>|t|)
    (Intercept)  3.62064    0.70965 17.91990   5.102 7.55e-05 ***
    lbline      -1.05124    0.22197 17.66964  -4.736 0.000173 ***
    treatT       0.16935    0.06811  8.34441   2.486 0.036578 *
    arm'R'      -0.07596    0.07848 10.09734  -0.968 0.355718
    Correlation of Fixed Effects:
           (Intr) lbline treatT
    lbline -0.990
    treatT  0.139 -0.204
    arm'R'  0.467 -0.527  0.428
    – Statbong Apr 24 '21 at 01:05
  • Please **edit your question** to include this information (much easier to read). Since you have a *total* of 26 observations, could you include your data set in your question? That would make it much easier to reproduce the problem ... – Ben Bolker Apr 24 '21 at 16:48
  • Ok, I have added the full data set in the question. Thanks for looking into it. – Statbong Apr 25 '21 at 01:42