In my data I have problems with heteroscedasticity, as indicated by the Breusch-Pagan test and the NCV test, both of which are significant.
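For completeness, the checks were run roughly as follows (just a sketch of the calls; model_maineffect is my fitted lm object):

library(lmtest)   # bptest()
library(car)      # ncvTest()

bptest(model_maineffect)    # Breusch-Pagan test on the fitted model
ncvTest(model_maineffect)   # non-constant variance (NCV) score test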
Therefore, I would like to follow the method posted by Gavin Simpson here: Regression with Heteroskedasticity Corrected Standard Errors
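For context, mySummary() is essentially the wrapper from that answer, i.e. something along these lines (my sketch; the exact definition is in the linked post):

library(lmtest)    # coeftest(), waldtest()
library(sandwich)  # vcovHC()

## robust t tests for the coefficients plus an overall Wald test
## against the intercept-only model, both using the supplied vcov
mySummary <- function(model, VCOV) {
    print(coeftest(model, vcov. = VCOV))
    print(waldtest(model, vcov = VCOV))
}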
This seems to work, but now I am having trouble interpreting the results, as they look very different from my original multiple regression output.
mySummary(model_maineffect, vcovHC)
t test of coefficients:

              Estimate Std. Error  t value  Pr(>|t|)    
(Intercept) -0.5462588  0.0198430 -27.5291 < 2.2e-16 ***
IV1          0.0762802  0.0082630   9.2315 < 2.2e-16 ***
Control1    -0.0062260  0.0071657  -0.8689   0.38493    
Control2     0.0277049  0.0066251   4.1818 2.910e-05 ***
Control3     0.0199855  0.0104345   1.9153   0.05547 .  
Control4    -0.4639035  0.0083046 -55.8608 < 2.2e-16 ***
Control5     0.6239948  0.0072652  85.8876 < 2.2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Wald test

Model 1: DV ~ IV1 + Control1 + Control2 + Control3 + Control4 + Control5
Model 2: DV ~ 1
  Res.Df Df      F    Pr(>F)    
1  14120                        
2  14128 -8 1304.6 < 2.2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Can I interpret these results in the same way as an ordinary multiple regression, i.e., does IV1 have a highly significant effect on DV because its Pr(>|t|) is < 0.001? And does the Wald test mean that the model is a significant improvement over the intercept-only model, since Pr(>F) is < 0.001? How should I report my R-squared in this case?
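For example, would it be appropriate to simply report the R-squared from the original, unadjusted fit, along the lines below (assuming the HC correction only changes the standard errors and tests, not the coefficient estimates or the fit itself)?

summary(model_maineffect)$r.squared       # R-squared of the original lm() fit
summary(model_maineffect)$adj.r.squared   # adjusted R-squared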