
So I have this regression model:

reg <- lm(NominalVar ~ Var1 + Var2)
summary(reg)

What I get with summary is the t-test (estimate/std. error) and the resulting p-value for each variable. Is there a way to do a t-test with unequal variances (Welch's test) instead of a regular two-sample t-test? In my case Var2 is a control variable, so I want a Welch-style t-test after controlling for that variable.
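For reference, without the control variable I would just run Welch's test directly (a minimal sketch, assuming Var1 is a two-level grouping factor):

# Plain Welch two-sample t-test on NominalVar by Var1, ignoring Var2
# (var.equal = FALSE requests the unequal-variances correction; it is also t.test's default)
t.test(NominalVar ~ Var1, var.equal = FALSE)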

  • This question is difficult to answer. "Two-sample t-test" sounds like a standard t-test done outside the regression context, but "controlling for a variable" indicates a regression. In the first case, the Frisch-Waugh-Lovell theorem comes to mind, though I am not sure it's applicable here (see the partialling-out sketch after these comments). In the second case, have a look at so-called "robust standard errors", which allow you to get t-tests robust to heteroskedasticity. In either case, this appears to be more about statistics than actual programming. If you struggle with the latter, please post formulas and we can help you with the computation. – coffeinjunky Sep 01 '16 at 15:37
  • Thank you for your input. Robust standard errors are exactly what I was looking for! – magasr Sep 01 '16 at 16:00
  • In that case, have a look here: http://stackoverflow.com/questions/37528990/robust-and-clustered-standard-error-in-r-for-probit-and-logit-regression/37529874#37529874 – coffeinjunky Sep 01 '16 at 16:01
  • Yup, that's it. Thank you again. – magasr Sep 01 '16 at 16:07
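To make the Frisch-Waugh-Lovell suggestion concrete, here is a minimal sketch of partialling Var2 out of both NominalVar and Var1 and then testing the remaining association with a heteroskedasticity-robust standard error. The variable names are the hypothetical ones from the question; lmtest and sandwich are assumed to be installed.

library(lmtest)
library(sandwich)

# Partial the control variable out of both the outcome and the variable of interest
res_y  <- resid(lm(NominalVar ~ Var2))
res_x1 <- resid(lm(Var1 ~ Var2))

# By the FWL theorem, this slope equals the Var1 coefficient from the full model
fwl <- lm(res_y ~ res_x1)

# Heteroskedasticity-robust t-test on that slope
coeftest(fwl, vcovHC(fwl, type = "HC2"))

The point estimate matches the full regression exactly; only the degrees of freedom of the naive standard errors differ slightly, which is one reason the robust-standard-error route in the answer below is the more direct option.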

1 Answer


The answer is robust standard errors, as suggested by coffeinjunky in a comment on my question.

library(lmtest)
library(sandwich)

reg <- lm(NominalVar ~ Var1 + Var2)

# Heteroskedasticity-consistent (HC1) covariance matrix of the coefficients
reg$rse <- vcovHC(reg, type = "HC1")

# Coefficient t-tests using the robust covariance matrix
coeftest(reg, reg$rse)

Source: http://www.princeton.edu/~otorres/Regression101R.pdf
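To see what the robust correction changes, here is a self-contained sketch with simulated data in which the error variance depends on Var1 (exactly the unequal-variance situation Welch's test is designed for); all names and numbers below are made up for illustration.

library(lmtest)
library(sandwich)

set.seed(1)
Var1 <- rbinom(200, 1, 0.5)                     # hypothetical binary variable of interest
Var2 <- rnorm(200)                              # hypothetical control variable
NominalVar <- 1 + 2 * Var1 + Var2 +
  rnorm(200, sd = 1 + 2 * Var1)                 # error variance differs between groups

reg <- lm(NominalVar ~ Var1 + Var2)

summary(reg)$coefficients                       # classical, equal-variance t-tests
coeftest(reg, vcovHC(reg, type = "HC1"))        # heteroskedasticity-robust t-tests

The coefficients are identical in both outputs; only the standard errors, t-values and p-values change. With a single binary regressor and no controls, the HC2 variant is known to reproduce the Welch standard error, which is what makes robust standard errors the regression analogue of Welch's test.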
