I am trying to train a decision tree classifier for evaluating baseball players using scikit-learn's DecisionTreeClassifier. However, I would like to "pre-specify" or "force" some splits ahead of time, based on domain knowledge about how experts evaluate players (these rules need to be incorporated regardless of what the data says). For example, I want to force a split on batting average > .300.
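To make the intent concrete, here is a rough sketch of the behaviour I am after, done by hand rather than inside the tree itself (the data, column index, and depth are made up purely for illustration):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Fake player features and labels, for illustration only.
rng = np.random.default_rng(0)
X = rng.random((500, 4))
y = rng.integers(0, 2, 500)

# Assume column 0 is batting average (placeholder index).
BATTING_AVG_COL = 0
mask = X[:, BATTING_AVG_COL] > 0.300

# Manually enforce the "first split" by fitting one tree per partition.
tree_above = DecisionTreeClassifier(max_depth=3).fit(X[mask], y[mask])
tree_below = DecisionTreeClassifier(max_depth=3).fit(X[~mask], y[~mask])

def predict(x_row):
    # Route through the forced split, then defer to the learned subtree.
    subtree = tree_above if x_row[BATTING_AVG_COL] > 0.300 else tree_below
    return subtree.predict(x_row.reshape(1, -1))[0]
```

What I would really like is for this forced split to live at the root of a single tree, rather than having to stitch two trees together like this.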
A related question: can I "pre-load" a previously trained decision tree model and merely "update" it in a subsequent training run? Or does the decision tree classifier need to re-learn all the rules from scratch each time I run it? The analogy I'm trying to make here is to transfer learning, but applied to decision trees.
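To show what I mean by "pre-loading", here is roughly the workflow I am imagining (the file name and data are placeholders):

```python
import joblib
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Fake data purely for illustration.
rng = np.random.default_rng(0)
X_old, y_old = rng.random((500, 4)), rng.integers(0, 2, 500)
X_new, y_new = rng.random((100, 4)), rng.integers(0, 2, 100)

# Train and persist an initial tree.
clf = DecisionTreeClassifier(max_depth=4).fit(X_old, y_old)
joblib.dump(clf, "baseball_tree.joblib")

# Later: reload the model and train on new data. As far as I can tell,
# fit() simply re-learns the tree on X_new/y_new rather than updating
# the existing rules. Is there a way to continue from the saved tree?
clf = joblib.load("baseball_tree.joblib")
clf.fit(X_new, y_new)
```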