Doing a multiple regression analysis in R, I ran two different models:
Model 1:
lm(response ~ explanatorynumeric + explanatorycategorical, dataset)
Model 2:
lm(response ~ explanatorynumeric + explanatorycategorical + 0, dataset)
Adding + 0
to the model was a recommendation from a DataCamp course. It tells R
not to estimate the intercept when there is a categorical explanatory variable. Except for the + 0,
both models are identical.
The results of predict()
for Model 1 and for Model 2 are exactly the same.
However, I get an R-squared
much larger for Model 2 (about 0.8) than for Model 1 (about 0.37).
I can't understand why there is such a difference between the R-squared values of the two models.
If this makes sense to any of you, I'd appreciate an explanation.
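For reference, the same behaviour can be reproduced with R's built-in mtcars data (mtcars, mpg, wt and cyl here are stand-ins for my actual dataset and variables):

```r
# Model 1: numeric + categorical predictor, with intercept
m1 <- lm(mpg ~ wt + factor(cyl), data = mtcars)

# Model 2: same formula, but + 0 suppresses the intercept
m2 <- lm(mpg ~ wt + factor(cyl) + 0, data = mtcars)

# Fitted values are identical...
all.equal(unname(predict(m1)), unname(predict(m2)))

# ...yet the reported R-squared is much larger for the no-intercept model
summary(m1)$r.squared
summary(m2)$r.squared
```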