
Doing a multiple regression analysis in R, I ran two different models.

Model 1.

lm(response ~ explanatorynumeric + explanatorycategorical, dataset)

Model 2.

lm(response ~ explanatorynumeric + explanatorycategorical + 0, dataset)

Adding + 0 to the model was a recommendation from a DataCamp course. It tells R not to estimate the intercept when there is a categorical explanatory variable. Except for the + 0, the two models are identical, and predict() returns exactly the same values for Model 1 and Model 2. However, I get a much larger R-squared for Model 2 (about 0.8) than for Model 1 (about 0.37).
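Here is a minimal reproducible sketch of the situation (simulated data, not my actual dataset; the variable names just mirror the placeholders above):

set.seed(1)
dataset <- data.frame(
  explanatorynumeric     = rnorm(100),
  explanatorycategorical = factor(sample(c("a", "b", "c"), 100, replace = TRUE))
)
dataset$response <- 5 + 2 * dataset$explanatorynumeric +
  as.numeric(dataset$explanatorycategorical) + rnorm(100)

# Same two models as above: with and without an intercept
m1 <- lm(response ~ explanatorynumeric + explanatorycategorical, dataset)
m2 <- lm(response ~ explanatorynumeric + explanatorycategorical + 0, dataset)

all.equal(predict(m1), predict(m2))   # TRUE: identical fitted values
summary(m1)$r.squared                 # moderate R-squared
summary(m2)$r.squared                 # much larger R-squared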

I can't understand why there is such a big difference between the R-squared values of the two models.

If this makes sense to any of you, I'd appreciate an explanation.

  • https://stats.idre.ucla.edu/other/mult-pkg/faq/general/faq-why-are-r2-and-f-so-large-for-models-without-a-constant/ https://stats.stackexchange.com/questions/26176/removal-of-statistically-significant-intercept-term-increases-r2-in-linear-mo – Ben Bolker Jan 27 '21 at 19:46
  • The short answer is that R squared is calculated differently for intercept and no-intercept models, so it is not meaningful to compare them. The value of excluding the intercept is that the categorical variable will get a full set of levels in the 0 intercept model. As you have noticed, in this case the two models only differ in parameterizations and provide the same predictions. – G. Grothendieck Jan 27 '21 at 19:59 (a short sketch of the two calculations appears after these comments)
  • Should I report the bigger R-squared then? I understand that reporting an R-squared of 0.8 means that the model can explain 80% of the variance. How can it be that almost the same model, making the same predictions, can differ in the amount of variance it explains? It doesn't make any sense to me, and I believe reviewers will not believe such a big R-squared. Thanks a lot for answering, by the way. – Martin Puddington Jan 27 '21 at 20:07
  • Use the R squared from the intercept model. The R squared for the 0 intercept model does not have the variance reduction interpretation that you refer to. – G. Grothendieck Jan 27 '21 at 20:16
  • Thanks a lot again. The 0-intercept R^2 was too good to be true. – Martin Puddington Jan 27 '21 at 22:15
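To make the calculation difference in the comments above concrete: summary.lm() takes the total sum of squares about the mean of the response when the model has an intercept, but about zero when it does not. A small sketch of the two calculations, using the m1/m2 pair from the reproducible example in the question:

y <- dataset$response

rss <- sum(residuals(m1)^2)         # same for m1 and m2: identical fitted values

tss_mean <- sum((y - mean(y))^2)    # total SS about the mean (intercept model)
tss_zero <- sum(y^2)                # total SS about zero (no-intercept model)

1 - rss / tss_mean                  # matches summary(m1)$r.squared
1 - rss / tss_zero                  # matches summary(m2)$r.squared (inflated)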

0 Answers