
I noticed that when back-transforming predictions from my zero-truncated negative binomial model from the link scale to the response scale, the values are not the same as predicting directly on the response scale:

set.seed(1)
data <- data.frame("x" = rep(1:5, 2), "y" = as.integer(runif(10)*100+1), "cat" = rep(c("a", "b"), each= 5))
mod <- glmmTMB::glmmTMB(y ~ x + (1|cat), data = data, family = truncated_nbinom2("log"))

new_data <- data.frame("x" = 1:3, "cat" = NA)
predict(mod, newdata = new_data, type = "response", re.form = NA)
exp(predict(mod, newdata = new_data, type = "link", re.form = NA))

Output:

> predict(mod, newdata = new_data, type = "response", re.form = NA)
[1] 79.08138 65.49391 54.24264
> exp(predict(mod, newdata = new_data, type = "link", re.form = NA))
[1] 79.07609 65.48655 54.23242

Why are they not the same? Since I specified the link function as "log", exp() should invert it. The reason I don't predict directly on the response scale is that I want to calculate confidence intervals for an effect plot, so I wanted to compute the lower and upper confidence interval as exp(fit +/- 2 * se.fit).
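For context, this is the kind of CI computation I have in mind. The predict() call is shown commented out; the fit and se.fit values below are placeholders I made up, not output from the model above:

```r
# Intended workflow (predict.glmmTMB supports se.fit = TRUE):
# p <- predict(mod, newdata = new_data, type = "link", se.fit = TRUE, re.form = NA)

# Placeholder link-scale fits and standard errors for illustration:
p <- list(fit    = c(4.370, 4.182, 3.993),
          se.fit = c(0.20, 0.15, 0.18))

# Back-transform point estimate and approximate 95% Wald interval:
ci <- data.frame(
  estimate = exp(p$fit),
  lower    = exp(p$fit - 2 * p$se.fit),
  upper    = exp(p$fit + 2 * p$se.fit)
)
ci
```

Because exp() is monotone, the interval bounds always bracket the back-transformed estimate, but this only gives a CI for exp(eta), which is why the mismatch above worries me.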

Edit 1: This also happens when starting from a "fresh" environment.

Edit 2: This does not happen with non-truncated distributions, e.g. family = "poisson" or family = nbinom2("log"). However, other truncated distributions, e.g. family = truncated_compois("log"), show the same discrepancy.
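One plausible explanation (an assumption on my part, not confirmed from the glmmTMB source): for truncated families, type = "response" may return the mean of the zero-truncated distribution, mu / (1 - P(Y = 0)), while exp() of the link-scale prediction gives the untruncated mean mu. For nbinom2 the zero probability shrinks as mu grows, which would fit with the gap being tiny here but larger for other seeds. A base-R sketch of the adjustment (theta is a hypothetical dispersion value, not taken from the fitted model):

```r
# Zero-truncated NB2 mean vs. untruncated mean (base R only).
mu    <- 79.07609   # exp(link-scale prediction) for the first new_data row
theta <- 1.5        # hypothetical dispersion parameter

p0 <- dnbinom(0, mu = mu, size = theta)   # P(Y = 0) under the untruncated NB2
truncated_mean <- mu / (1 - p0)           # mean of the zero-truncated NB2

c(untruncated = mu, truncated = truncated_mean)
```

The truncated mean is always at least mu, which matches the direction of the discrepancy in the output above (the type = "response" values are slightly larger).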

Zoe
  • I can't replicate this I'm afraid. I get identical outputs from both lines of code. – Allan Cameron Jul 17 '23 at 08:57
  • Can you give me what your output is when you call `.Machine$double.eps`? This is the only reason I can think of. Investigating the object, I found the inverse link function here: `mod$modelInfo$family$linkinv` and it uses `.Machine$double.eps`. – Zoe Jul 17 '23 at 09:48
  • I get `[1] 2.220446e-16` – Allan Cameron Jul 17 '23 at 10:31
  • Actually, I seem to sometimes get a discrepancy depending on the random seed set. Can you include `set.seed(1)` at the start of your script and update your question so that we can see if this is reproducible? – Allan Cameron Jul 17 '23 at 10:35
  • Thank you for your help! Unfortunately, the value is the same for me, so this does not seem to be the issue. I changed the code to a fixed seed! Although I would like to mention that sometimes the error is really small, but sometimes the values are almost 1 count higher, which is not so good.... (e.g. seed 222 is much worse) – Zoe Jul 17 '23 at 10:39
  • Thanks Zoe. I'm now getting exactly the same results as you. I'll look into this – Allan Cameron Jul 17 '23 at 10:48
  • Do you have any update on this yet? If you think this is a bug, I could also open an issue on the glmmTMB github? – Zoe Jul 26 '23 at 05:58

0 Answers