
Suppose I have a linear mixed model with two continuous predictors and contrast coding for two factors, each with two categories (A, B). A random effect is optional.

contrasts(data$fac1) <- c(-.5,.5)
contrasts(data$fac2) <- c(-.5,.5)

model <- lme(Y ~ x1 + x2 + x1:fac1 + x2:fac1 + x1:fac2 + x2:fac2 +
               fac1 + fac2 + fac1:fac2,
             random = ~ 1 | group, data = data)

Then the output gives me the main effects of x1 and x2 (the average slopes, given the ±0.5 coding) and the differences in slopes between the levels of fac1 and fac2.

But how can I calculate individual p-values for, say, the slope of x1 when fac1 == "A" and fac2 == "B"?

Is there an R package for this, or do I have to calculate them manually?

And if so, how? Via calls to vcov(), adding up the respective matrix entries, and a call to pt() (and which df should I use)? Thanks!
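For reference, the manual route sketched above (summing coefficients, a quadratic form on vcov(), and pt()) might look like this on simulated data. All names here are illustrative, and reusing the denominator df that lme reports for the x1 term is only an approximation:

```r
library(nlme)

## Simulated data mirroring the question's setup (names are illustrative)
set.seed(42)
n <- 200
data <- data.frame(
  x1    = rnorm(n),
  x2    = rnorm(n),
  fac1  = factor(sample(c("A", "B"), n, replace = TRUE)),
  fac2  = factor(sample(c("A", "B"), n, replace = TRUE)),
  group = factor(sample(1:10, n, replace = TRUE))
)
contrasts(data$fac1) <- c(-.5, .5)
contrasts(data$fac2) <- c(-.5, .5)
data$Y <- rnorm(n) + data$x1 + 0.5 * data$x2

model <- lme(Y ~ x1 + x2 + x1:fac1 + x2:fac1 + x1:fac2 + x2:fac2 +
               fac1 + fac2 + fac1:fac2,
             random = ~ 1 | group, data = data)

## Contrast vector picking out the x1 slope at fac1 == "A" (coded -0.5)
## and fac2 == "B" (coded +0.5); the single contrast column is unnamed,
## so R labels the coefficients "fac11" and "fac21".
b <- fixef(model)
V <- vcov(model)
stopifnot(all(c("x1", "x1:fac11", "x1:fac21") %in% names(b)))
cvec <- setNames(numeric(length(b)), names(b))
cvec["x1"]       <- 1
cvec["x1:fac11"] <- -0.5
cvec["x1:fac21"] <-  0.5

est <- sum(cvec * b)                       # cell-specific slope of x1
se  <- sqrt(drop(t(cvec) %*% V %*% cvec))  # its standard error

## Approximate df: reuse the denominator df lme reports for the x1 term
df <- summary(model)$tTable["x1", "DF"]
p  <- 2 * pt(-abs(est / se), df)
c(estimate = est, se = se, df = df, p = p)
```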

Jmmer
  • I am curious about your formula: why specify each interaction with e.g. `x1 + fac1 + x1:fac1` when you can achieve the same with `x1*fac1`? – mhovd Jul 06 '22 at 17:51
  • Also, I believe that `emmeans` is what you are looking for. – mhovd Jul 06 '22 at 17:54
  • You are right; it is just to illustrate all the coefficients needed, and because I do not want an interaction between x1 and x2, I have not written, say, (x1+x2+fac1+fac2)^2. – Jmmer Jul 06 '22 at 17:59
  • Thanks, I will try emmeans. As this package is quite extensive, do you maybe have an easy working example? And do you know if I would need to correct for multiple testing in this case? – Jmmer Jul 06 '22 at 18:07
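A minimal sketch of the emmeans route suggested in the comments, on simulated data mirroring the question's setup (the data and names here are invented; emtrends estimates the per-cell slopes and test() reports t-tests against zero):

```r
library(nlme)
library(emmeans)

## Simulated data mirroring the question's setup
set.seed(1)
n <- 200
data <- data.frame(
  x1    = rnorm(n),
  x2    = rnorm(n),
  fac1  = factor(sample(c("A", "B"), n, replace = TRUE)),
  fac2  = factor(sample(c("A", "B"), n, replace = TRUE)),
  group = factor(sample(1:10, n, replace = TRUE))
)
contrasts(data$fac1) <- c(-.5, .5)
contrasts(data$fac2) <- c(-.5, .5)
data$Y <- rnorm(n) + data$x1

model <- lme(Y ~ x1 + x2 + x1:fac1 + x2:fac1 + x1:fac2 + x2:fac2 +
               fac1 + fac2 + fac1:fac2,
             random = ~ 1 | group, data = data)

## Slope of x1 in each fac1 x fac2 cell, with a t-test per cell;
## the row for fac1 == "A", fac2 == "B" is the one asked about.
cell_slopes <- test(emtrends(model, ~ fac1 * fac2, var = "x1"))
cell_slopes
```

test() has an `adjust` argument if a multiplicity correction over the four cells is wanted.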

1 Answer


You could try the marginaleffects package. (Disclaimer: I am the author.)

There are many vignettes on the website, including one with simple examples of mixed effects models with the lme4 package: https://vincentarelbundock.github.io/marginaleffects/articles/lme4.html

You can specify the values of covariates using the newdata argument and the datagrid function. The covariates you do not specify in datagrid will be held at their means or modes:

library(lme4)
library(marginaleffects)

mod <- glmer(am ~ mpg * hp + (1 | gear),
             data = mtcars,
             family = binomial)

marginaleffects(mod, newdata = datagrid(hp = c(100, 110), gear = 4))
#>   rowid     type term        dydx  std.error statistic   p.value
#> 1     1 response  mpg 0.077446700 0.33253683 0.2328966 0.8158417
#> 2     2 response  mpg 0.337725702 0.90506056 0.3731526 0.7090349
#> 3     1 response   hp 0.006199167 0.02647471 0.2341543 0.8148652
#> 4     2 response   hp 0.025604198 0.06770870 0.3781522 0.7053175
#>      conf.low  conf.high      mpg  hp gear
#> 1 -0.57431351 0.72920691 20.09062 100    4
#> 2 -1.43616041 2.11161181 20.09062 110    4
#> 3 -0.04569032 0.05808865 20.09062 100    4
#> 4 -0.10710242 0.15831082 20.09062 110    4
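To make the comment thread below concrete: a sketch of how this might look for the question's model after switching from nlme::lme to lme4::lmer, on simulated data (names mirror the question; pinning the grouping variable to level "1" is an arbitrary choice, and in newer versions of marginaleffects the function shown above is called slopes()):

```r
library(lme4)
library(marginaleffects)

## Simulated data mirroring the question's setup
set.seed(1)
n <- 200
data <- data.frame(
  x1    = rnorm(n),
  x2    = rnorm(n),
  fac1  = factor(sample(c("A", "B"), n, replace = TRUE)),
  fac2  = factor(sample(c("A", "B"), n, replace = TRUE)),
  group = factor(sample(1:10, n, replace = TRUE))
)
contrasts(data$fac1) <- c(-.5, .5)
contrasts(data$fac2) <- c(-.5, .5)
data$Y <- rnorm(n) + data$x1

mod <- lmer(Y ~ x1 + x2 + x1:fac1 + x2:fac1 + x1:fac2 + x2:fac2 +
              fac1 + fac2 + fac1:fac2 + (1 | group), data = data)

## Slope of x1 in the fac1 == "A", fac2 == "B" cell; the grouping
## variable must also be pinned to a level (or averaged over).
res <- slopes(mod, variables = "x1",
              newdata = datagrid(fac1 = "A", fac2 = "B", group = "1"))
res
```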
Vincent
  • Thanks, this is very helpful, but how would I use the `hypothesis = ...` argument for the problem above? Also, the package does not support nlme, but I could switch to lme4. – Jmmer Jul 06 '22 at 20:02
  • @Jmmer there is a very detailed vignette on the `hypothesis` argument: https://vincentarelbundock.github.io/marginaleffects/articles/hypothesis.html – Vincent Jul 06 '22 at 20:41
  • @Zheyuan Li I wrote a detailed comparison to some of the other packages out there. My (obviously very biased) view is that `emmeans` is more powerful in some areas, but `marginaleffects` is more user-friendly and supports more models. `emmeans` used to be the only way to do complex contrasts and hypothesis tests, but `marginaleffects` can now do that as well. `margins` is very similar to `marginaleffects` but less powerful. Most other options are wrappers around `emmeans`. https://vincentarelbundock.github.io/marginaleffects/articles/alternative_software.html – Vincent Jul 06 '22 at 20:45
  • Thanks, I saw the vignette, but I am not sure how the problem above can be tackled with it. Could you add a minimal example for the specified problem to your answer? – Jmmer Jul 06 '22 at 21:02
  • OK, I edited my answer. I don't think you want the `hypothesis` argument for that. You just want to specify values of the covariates at which to compute the slope. This is done in the `newdata` argument. – Vincent Jul 06 '22 at 21:19
  • Thanks! So running this on my model example would mean: `marginaleffects(model, newdata = datagrid(fac1 = "A", fac2 = "B", x1 = c(-1, 1)))`? Unfortunately this ends in an error message: Error: Unable to compute adjusted predictions for this model. Either the `newdata` does not meet the requirements of the model's `predict()` method, or this model is not supported. If you believe this model should be supported, you can file a report on the Github Issue Tracker: https://github.com/vincentarelbundock/marginaleffects/issues – Jmmer Jul 06 '22 at 21:40
  • You probably need to specify a value for the group variable, as I do with `gear` in my example. Or request a global mean using extra arguments as described in the vignette. – Vincent Jul 06 '22 at 22:01
  • I do this (see my comment above), but it is not working, unfortunately. – Jmmer Jul 06 '22 at 23:57
  • Is there otherwise a way to calculate this manually, via vcov()? – Jmmer Jul 07 '22 at 00:02
  • Maybe I can use bootstrapping from the lmeresampler package? – Jmmer Jul 07 '22 at 03:08
  • It's impossible to diagnose the problem without a minimal working example. https://stackoverflow.com/questions/5963269/how-to-make-a-great-r-reproducible-example/5963610#5963610 – Vincent Jul 07 '22 at 04:58