
Having used lmerTest::lmer() to perform linear regression with repeated-measures data, I would like to adjust for multiple comparisons.
I ran several models and used Bonferroni-Holm to adjust each of them; see my approach below.
Eventually, I would like to generate a regression table with modelsummary that includes the adjusted p-values and additional goodness-of-fit statistics (as in this SO post).

– Maybe there is also a much easier way in modelsummary() to adjust the p-values that I am not aware of yet (both for individual models and across a group of models altogether)?

MWE

library("modelsummary")
library("lmerTest") 
library("parameters")
library("performance")

mod1  <- lmer(mpg ~ hp + (1 | cyl), data = mtcars) 
mod2  <- lmer(mpg ~ hp + (1 | gear), data = mtcars) 

l_mod <- list("mod1" = mod1, "mod2" = mod2)
# msummary(l_mod) # works well

adjMC <- function(model) {
  model_glht <- glht(model)
  model_mc_adj <- summary(model_glht, test = adjusted("holm")) # Bonferroni-Holm is less conservative and uniformly more powerful than Bonferroni
  return(model_mc_adj)
}
mod1_adj <- adjMC(mod1)
mod2_adj <- adjMC(mod2)

l_mod_adj <- list("mod1_adj" = mod1_adj, "mod2_adj" = mod2_adj)
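(Aside: the claim in the code comment above is easy to check directly with stats::p.adjust() on made-up p-values; Holm's adjusted values are never larger than Bonferroni's:)

p_raw <- c(0.010, 0.020, 0.030)
p.adjust(p_raw, method = "holm")        # => 0.03 0.04 0.04
p.adjust(p_raw, method = "bonferroni")  # => 0.03 0.06 0.09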
 

However, upon adjusting the p-values, the class of the model changes from "lmerModLmerTest" to "summary.glht":

class(mod1) # => "lmerModLmerTest"
class(mod1_adj) # => "summary.glht"

"summary.glht" is among the list of supported models of modelsummary, and I succeed in obtaining the estimates and p-values:

parameters::model_parameters(mod1_adj)
# Parameter        | Coefficient |   SE |         95% CI | Statistic | df |      p
# --------------------------------------------------------------------------------
# (Intercept) == 0 |       24.71 | 3.13 | [17.84, 31.57] |      7.89 |  0 | < .001
# hp == 0          |       -0.03 | 0.01 | [-0.06,  0.00] |     -2.09 |  0 | 0.064
# e.g. with modelsummary:
modelsummary::get_estimates(mod1_adj) # gives more info than broom::tidy(mod1_adj)

However, obtaining the goodness-of-fit statistics did not succeed:

performance::model_performance(mod1_adj)
# 'r2()' does not support models of class 'summary.glht'.
# Can't extract residuals from model.
# no 'nobs' method is available
# Models of class 'summary.glht' are not yet supported.
# NULL

broom::glance(mod1_adj) # and also for broom.mixed::glance(mod1_adj)
# => Error: No glance method for objects of class summary.glht

# e.g. with modelsummary:
modelsummary::get_gof(mod1_adj) 
# => Cannot extract information from models of class "summary.glht".
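(Note: the goodness-of-fit statistics themselves are still available from the original, unadjusted model, so it is only the link between the adjusted object and the GOF that is missing:)

# sketch: extract GOF from the unadjusted lmerModLmerTest object instead
performance::model_performance(mod1)
modelsummary::get_gof(mod1)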

To be able to include the adjusted p-values in the final regression table, I tried to write custom methods for "summary.glht" that extract the estimates and the goodness-of-fit information. I scanned summary(mod1_adj) for the required pieces (e.g., summary(mod1_adj)$coef); the estimates seem to live in mod1_adj$test, but I am not sure my draft methods below are complete:

names(mod1_adj$test)
# [1] "pfunction"    "qfunction"    "coefficients" "sigma"        "tstat"        "pvalues"      "type" 

tidy.summary.glht <- function(x, ...) {
    # x is already a summary.glht object, so the adjusted results can be
    # pulled straight from x$test (see names(mod1_adj$test) above)
    ret <- tibble::tibble(term      = names(x$test$coefficients),
                          estimate  = unname(x$test$coefficients),
                          std.error = unname(x$test$sigma),
                          statistic = unname(x$test$tstat),
                          p.value   = unname(x$test$pvalues))
    ret
}

glance.summary.glht <- function(x, ...) {
  data.frame(
    "Model" = "summary.glht",
    # nobs() has no method for summary.glht itself, so fall back to the
    # underlying fitted model stored by multcomp::glht() in x$model
    "nobs" = stats::nobs(x$model)
  )
}
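If these methods work, broom's generics should dispatch to them (S3 methods defined in the global environment are found by broom::tidy() and broom::glance()), and modelsummary, which falls back on broom for such classes, should then be able to build the table. A sketch:

broom::tidy(mod1_adj)
broom::glance(mod1_adj)
msummary(l_mod_adj)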

1 Answer


The problem is that the broom package has a tidy method for glht models but no glance method for them. Since the class is only partially supported, modelsummary can extract estimates but not goodness-of-fit statistics, so it breaks. To see this, you can try modelsummary's get_gof and get_estimates functions on a glht object.

In contrast, modelsummary can easily extract both estimates and goodness-of-fit statistics from lmerModLmerTest models. One approach would thus be to pass a lmerModLmerTest object to modelsummary, but to modify the p-values on the fly by defining a tidy_custom.CLASSNAME method, as described in the modelsummary documentation.

Estimate model:

library(modelsummary)
library(lmerTest) 
library(multcomp)

mod  <- lmer(mpg ~ hp + (1 | cyl), data = mtcars) 

Then, we define a tidy_custom method appropriate for your model class (again, see the documentation linked above for details).

Note that, for some reason, the term names are modified slightly when extracting results from a glht object instead of a lmerModLmerTest model. This is a problem in the upstream package, so you may want to report it there (not sure if it's broom or performance, but it would be easy to check). In any case, this is easy to fix for our purposes by simply adding a gsub call to our new method:

tidy_custom.lmerModLmerTest <- function(x, ...) {
  new <- multcomp::glht(x)
  new <- summary(new, test = adjusted('holm'))
  out <- get_estimates(new)
  out$term <- gsub(" == 0", "", out$term)
  out
}

modelsummary(mod, statistic = "p.value")

            Model 1
(Intercept)  24.708
            (0.000)
hp           -0.030
            (0.064)
Num.Obs.         32
R2 Marg.      0.143
R2 Cond.      0.674
AIC           181.9
BIC           187.8
ICC             0.6
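Since the question also asks about a group of models: the tidy_custom method is class-based, so it is applied automatically to every lmerModLmerTest model passed to modelsummary, each model's p-values being Holm-adjusted separately. A sketch, reusing the second model from the question:

mod2 <- lmer(mpg ~ hp + (1 | gear), data = mtcars)
modelsummary(list("mod1" = mod, "mod2" = mod2), statistic = "p.value")

(Adjusting jointly across models would instead require pooling all p-values before adjustment, e.g. with stats::p.adjust().)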
  • Thank you for the quick and informative response! This got me a lot closer! If I employ your code I receive the following error though: `Error in sanity_tidy(est, est_custom, estimate, statistic, class(model)[1]): Assertion on 'tidy_custom' failed: Must have exactly 4 rows, but has 2 rows.` What am I missing? – mavericks Jan 16 '21 at 07:31
  • Complete error message: `7. stop(simpleError(sprintf(msg, ...), call.)) 6. mstop("Assertion on '%s' failed: %s.", var.name, res, call. = sys.call(-2L)) 5. makeAssertion(x, res, .var.name, add) 4. checkmate::assert_data_frame(tidy_custom, nrows=nrow(tidy_output), min.cols=2, null.ok=T) 3. sanity_tidy(est, est_custom, estimate, statistic, class(model)[1]) 2. extract_estimates(model=models[[i]], fmt=fmt, estimate=estimate[[i]], statistic=statistic, vcov=vcov[[i]], conf_level=conf_level, stars=stars, ...) 1. modelsummary(mod, statistic = "p.value") ` – mavericks Jan 16 '21 at 07:34
  • Thanks that's useful. I just pushed a new version of the package to Github. Could you install it and try again? After you install, make sure you restart the R session completely and that none of your old objects are loaded (through history or `.RData`, for example). `library(remotes);install_github("vincentarelbundock/modelsummary")` – Vincent Jan 16 '21 at 14:24
  • Thanks a lot! Will try just now and report back. Am also using your great plotting function `modelsummary::modelplot()`. Is there a way to have a different symbol for each model in the plot? – mavericks Jan 16 '21 at 14:27
  • The [website shows many ways to customize the output of `modelplot`](https://vincentarelbundock.github.io/modelsummary/articles/modelplot.html). But to be perfectly honest, when I really want to customize things, I just do `modelplot(mod, draw=FALSE)`. That gives me a clean data frame that I can feed immediately to `ggplot2`. – Vincent Jan 16 '21 at 15:09
  • Also, will try the `modelplot(mod, draw=FALSE)` version. For some reason, `modelplot(list(model_A,model_B))` would mix up the order of models, such that the coefficients for model_A would have the colors for model_B, which is very misleading (luckily noticed since the summary table with the correct coefficients for each model was placed just above in the report). – mavericks Jan 16 '21 at 15:35
  • Weird. I have never seen that color mix-up before. If you could submit an issue to GitHub with a replicable example, that would be super useful. No worries if impossible. – Vincent Jan 16 '21 at 16:04
  • [Done](https://github.com/vincentarelbundock/modelsummary/issues/200) - thanks for any advice! – mavericks Jan 16 '21 at 17:16
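For reference, a minimal sketch of the `draw = FALSE` workflow mentioned in the comments; the column names (term, estimate, conf.low, conf.high) are assumed from modelsummary's tidy output conventions:

library(ggplot2)
dat <- modelplot(mod, draw = FALSE)  # returns the plotting data rather than a plot
ggplot(dat, aes(x = estimate, y = term)) +
  geom_pointrange(aes(xmin = conf.low, xmax = conf.high))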