Variations on this question have been asked before (e.g. Is there a way of getting "marginal effects" from a `glmer` object), and most of them suggest using ggeffects (or sjPlot). However, a statistician at our faculty disagrees with the output from ggeffects.
I'm new to both statistics and R (so I probably shouldn't be working with glmer...), and I'm having some trouble understanding what I'm doing wrong.
My model in its simplest form is a glmer from lme4: `outcome (binary) ~ FE (binary) + (1|RE)`. The fixed effect is a test performed on some, but not all, of the individuals in my random effect.
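For context, the fit looks roughly like this (a minimal sketch; `df` and the variable names are placeholders):

```r
library(lme4)

# Binary outcome, binary fixed effect (the test), random intercept per individual
model <- glmer(outcome ~ FE + (1 | RE), data = df, family = binomial)
summary(model)
```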
Model output:

```
Intercept:    1.2654
FE_2:        -0.2305
RE Std. Dev.: 2.896
```

ggpredict:

```r
ggpredict(model, type = "fe", terms = "FE")
#> 1: 0.780
#> 2: 0.738
```
Now, as far as I've understood, I can get the marginal effect on the probability scale for test 2 like this:
```r
# Linear predictor for FE = 2, with the random intercept set to zero
y <- 1.2654 - 0.2305

# Inverse logit (equivalently: plogis(y))
prob <- exp(y) / (1 + exp(y))
```

This matches the output from ggpredict exactly: 0.738.
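If I've understood correctly, the same number also comes out of `predict()` when the random effects are dropped (a sketch; I'm assuming `FE` is a factor with levels "1" and "2"):

```r
# re.form = NA excludes the random effects, giving the conditional
# prediction for a "typical" individual with a random intercept of zero
predict(model,
        newdata = data.frame(FE = factor("2", levels = c("1", "2"))),
        re.form = NA, type = "response")
```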
However, he's saying that this is the conditional probability for each individual (one whose random effect is zero), and that to get a probability I can generalise to a population I need to do something I don't fully understand, which, as far as I can tell from his code, is averaging over the random-effects distribution. His quick example of a solution:
```r
# Draw many random intercepts from N(0, 2.896^2), add them to the
# linear predictor for FE = 2, transform to probabilities, and average
y <- 1.2654 - 0.2305 + rnorm(1e8, sd = 2.896)
prob_true <- exp(y) / (1 + exp(y))
mean(prob_true)
#> 0.62
```
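If it helps, I think his Monte Carlo average can be checked without simulation by integrating the inverse logit against the random-intercept density (my own sketch, not anything from ggeffects):

```r
# Population-averaged probability for FE = 2: E[plogis(eta + u)],
# with u ~ N(0, 2.896^2)
integrand <- function(u) plogis(1.2654 - 0.2305 + u) * dnorm(u, sd = 2.896)
integrate(integrand, lower = -Inf, upper = Inf)$value
# should agree with his simulated 0.62 up to Monte Carlo/rounding error
```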
Using ggaverage with the same arguments as the previous ggpredict, i.e. `ggaverage(model, type = "fe", terms = "FE")`, gives me almost the same probability, 0.639. So, finally, my questions:
- Is ggaverage the same as his solution, just with some simulation difference and/or rounding error? (See my attempt at checking this below.)
- If his method is the way to go, how do I get the same results from ggeffects?
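To show what I mean by simulation difference: repeated runs of his snippet give slightly different values (a minimal sketch with a smaller number of draws; the seed is arbitrary):

```r
set.seed(42)  # arbitrary seed, just for reproducibility

# Five independent Monte Carlo estimates of the population-averaged probability
replicate(5, {
  y <- 1.2654 - 0.2305 + rnorm(1e6, sd = 2.896)
  mean(plogis(y))
})
# the estimates should differ only around the third decimal place
```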