
I work with domain experts who mostly use R2 and explained variance as their metrics. Hence, when I share models with them, I want my model to optimize that specific metric (R2). I often use DecisionTreeRegressor from scikit-learn, and its criterion parameter only accepts the following:

{"squared_error", "friedman_mse", "absolute_error", "poisson"}

(source: the scikit-learn documentation for DecisionTreeRegressor)

Is there an easy workaround to define my own cost function (R2 in my case) for the model to optimize?
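
For reference, here is a minimal sketch of what I do today: fit with the built-in `squared_error` criterion and report R2 separately via `r2_score` (the dataset and hyperparameters below are purely illustrative):

```python
# Minimal sketch (synthetic data, illustrative hyperparameters): fit with the
# built-in "squared_error" criterion, then report R2 to the domain experts.
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score

X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeRegressor(criterion="squared_error", max_depth=5, random_state=0)
tree.fit(X_train, y_train)

print("test R2:", r2_score(y_test, tree.predict(X_test)))
```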

  • You sound confused; just because your *business* metric happens to be R2, it does not mean that you can use this *metric* as the optimization criterion (loss); plus, I have *never* seen R2 being used as such (I guess it can even be shown that it is impossible mathematically). Please see my own answer [here](https://stackoverflow.com/a/47819022/4685471) for a similar discussion about the business metric (accuracy) and the loss. In short, you neither can nor need to do any modification here. – desertnaut Oct 11 '22 at 20:59
  • [R2 is just a linear scaling of MSE](https://stats.stackexchange.com/q/250730/232706), so just use `squared_error`. (cc @desertnaut) – Ben Reiniger Oct 12 '22 at 03:33
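
Edit: a quick numerical check of the relationship pointed out in the comments (synthetic data, purely illustrative). On a fixed target vector, R2 = 1 - MSE / Var(y), so a model that lowers MSE raises R2; minimizing MSE and maximizing R2 select the same model:

```python
# Numerical check: R2 = 1 - MSE / Var(y) on a fixed target vector, so
# minimizing MSE and maximizing R2 are equivalent. Data here is synthetic.
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)
y_true = rng.normal(size=200)
y_pred = y_true + rng.normal(scale=0.5, size=200)  # illustrative predictions

mse = mean_squared_error(y_true, y_pred)
r2 = r2_score(y_true, y_pred)

# np.var uses ddof=0, matching the SS_tot / n normalization inside r2_score.
assert np.isclose(r2, 1.0 - mse / np.var(y_true))
print(f"MSE={mse:.4f}, R2={r2:.4f}, 1 - MSE/Var(y)={1.0 - mse / np.var(y_true):.4f}")
```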

0 Answers