
I am using the RandomForestClassifier from the Python sklearn package to build a binary classification model. Below are the results of cross-validation:

Fold 1 : Train: 164  Test: 40
Train Accuracy: 0.914634146341
Test Accuracy: 0.55

Fold 2 : Train: 163  Test: 41
Train Accuracy: 0.871165644172
Test Accuracy: 0.707317073171

Fold 3 : Train: 163  Test: 41
Train Accuracy: 0.889570552147
Test Accuracy: 0.585365853659

Fold 4 : Train: 163  Test: 41
Train Accuracy: 0.871165644172
Test Accuracy: 0.756097560976

Fold 5 : Train: 163  Test: 41
Train Accuracy: 0.883435582822
Test Accuracy: 0.512195121951

I am using "Price" feature to predict "quality" which is a ordinal value. In each cross validation, there are 163 training examples and 41 test examples.

Apparently, overfitting occurs here. Are there any parameters provided by sklearn that can be used to overcome this problem? I found some parameters, e.g. min_samples_split and min_samples_leaf, but I do not quite understand how to tune them.

Thanks in advance!

Munichong
  • Have you tried using ExtraTreesClassifier? That will help if you have multiple predictors. If you're only training on one predictor and you only have 200 samples, I think you're always going to have some degree of overfitting. – BrenBarn Dec 09 '13 at 04:41
  • The variance in your test accuracy is large, but your sample set is very small. If you meant that the big gap between train and test accuracy indicates overfitting, that is **not** overfitting: consider nearest neighbors, where training error is always 0, so train accuracy is not meaningful here. – Falcon Dec 19 '13 at 03:29
  • Are you saying that you are trying to predict "quality" using only "Price"? If so then a random forest is not the best way. Try a logistic regression classifier. – denson Dec 17 '16 at 01:49
  • If you actually have multiple X variables that you are using to predict "quality" and you have imbalanced classes (more class = 0 than class = 1 or vice versa) then try using a StratifiedShuffleSplit during cross validation. – denson Dec 17 '16 at 01:52
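To illustrate the last suggestion, a stratified split can be passed straight into cross_val_score. This is a minimal sketch with random stand-in data, not the question's actual dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedShuffleSplit, cross_val_score

# Stand-in data; replace with your real feature matrix and labels.
rng = np.random.RandomState(0)
X = rng.rand(204, 5)
y = (rng.rand(204) > 0.7).astype(int)   # imbalanced binary labels

# Each split preserves the class proportions of y.
cv = StratifiedShuffleSplit(n_splits=5, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print("Test accuracy per split:", scores)
```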

2 Answers


I would agree with @Falcon w.r.t. the dataset size. It's likely that the main problem is the small size of the dataset. If possible, the best thing you can do is get more data: the more data you have, the less likely the model is (in general) to overfit, as random patterns that appear predictive get drowned out as the dataset grows.

That said, I would look at the following params:

  1. n_estimators: @Falcon is wrong; in general, the more trees, the less likely the algorithm is to overfit, so try increasing this. The lower this number, the closer the model is to a single decision tree with a restricted feature set.
  2. max_features: try reducing this number (try 30-50% of the number of features). This determines how many features each tree is randomly assigned. The smaller it is, the less likely the model is to overfit, but too small will start to introduce underfitting.
  3. max_depth: Experiment with this. It reduces the complexity of the learned models, lowering the overfitting risk. Try starting small, say 5-10, and increasing until you get the best result.
  4. min_samples_leaf: Try setting this to values greater than one. This has a similar effect to the max_depth parameter; it means a branch will stop splitting once its leaves each have that number of samples.

When doing this work, be scientific. Use three datasets: a training set, a separate 'development' set to tweak your parameters, and a test set to evaluate the final model with the optimal parameters. Change only one parameter at a time and evaluate the result, or experiment with sklearn's grid search (GridSearchCV) to search across these parameters all at once.
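As a concrete starting point, a grid search over the parameters above might look like the sketch below. The grid values are illustrative assumptions, not recommendations for this particular dataset, and the final model should still be evaluated on a held-out test set that was not used in the search:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Stand-in data; replace with your real feature matrix and labels.
rng = np.random.RandomState(0)
X = rng.rand(204, 10)
y = (rng.rand(204) > 0.5).astype(int)

param_grid = {
    "n_estimators": [100, 300, 500],
    "max_features": [0.3, 0.5],   # fraction of features considered per split
    "max_depth": [5, 10, None],
    "min_samples_leaf": [1, 3, 5],
}

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    cv=5,                 # 5-fold cross-validation within the training data
    scoring="accuracy",
)
search.fit(X, y)
print("Best parameters:", search.best_params_)
print("Best CV accuracy:", search.best_score_)
```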

Simon
  • A phenomenal answer. My only addition is that modern hyperparameter tuning has introduced better methods beyond grid and random search. Bayesian Optimization and Hyperband are two such techniques. Generally, successive halving techniques have been found to perform well. – Dave Liu Dec 09 '19 at 19:48

Adding this late comment in case it helps others.

In addition to the parameters mentioned above (n_estimators, max_features, max_depth, and min_samples_leaf), consider setting min_impurity_decrease.

Doing this manually is cumbersome, so use sklearn.model_selection.GridSearchCV to test a range of parameters (a parameter grid) and find the optimal ones.
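For example, min_impurity_decrease can be added to such a grid. The values below are illustrative assumptions to adapt to your own data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Stand-in data; replace with your real feature matrix and labels.
rng = np.random.RandomState(0)
X, y = rng.rand(200, 5), (rng.rand(200) > 0.5).astype(int)

# A node is only split if the split decreases the weighted impurity by at
# least min_impurity_decrease, so larger values prune more aggressively.
param_grid = {
    "min_samples_leaf": [1, 3, 5],
    "min_impurity_decrease": [0.0, 0.001, 0.01],
}
search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_)
```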

You can use 'gini' or 'entropy' for the criterion; however, I recommend sticking with 'gini', the default. In the majority of cases they produce the same result, but 'entropy' is more computationally expensive to compute.

Max depth works well and is an intuitive way to stop a tree from growing; however, just because a node is at a depth less than max_depth doesn't always mean it should split. If the information gained from splitting only addresses a single misclassification (or a few), then splitting that node may be supporting overfitting. You may or may not find this parameter useful, depending on the size of your dataset and/or the size and complexity of your feature space, but it is worth considering while tuning your parameters.

broti