I am using XGBoost with early stopping. After about 1000 boosting rounds, the model is still improving, but the magnitude of each improvement is very small:

```python
clf = xgb.train(params, dtrain, num_boost_round=num_rounds, evals=watchlist, early_stopping_rounds=10)
```
Is it possible to set a "tol" for early stopping, i.e. a minimum level of improvement that must be achieved to avoid triggering early stopping?
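To illustrate, something along these lines is the behaviour I'm after. This is only a rough sketch using XGBoost's custom callback interface (`xgb.callback.TrainingCallback`, available in reasonably recent versions); the class name `TolEarlyStopping` and its `tolerance` parameter are my own invention, not part of the XGBoost API, and it assumes a metric that is being minimised (e.g. rmse):

```python
import xgboost as xgb

# Rough sketch: stop training unless the monitored eval metric improves by
# more than `tolerance` at least once every `rounds` iterations. The class
# and its parameters are illustrative, not part of the XGBoost API.
class TolEarlyStopping(xgb.callback.TrainingCallback):
    def __init__(self, rounds, tolerance, data_name, metric_name):
        self.rounds = rounds            # patience, as in early_stopping_rounds
        self.tolerance = tolerance      # minimum improvement that "counts"
        self.data_name = data_name      # name of the watchlist entry to monitor
        self.metric_name = metric_name  # eval metric name, e.g. "rmse"
        self.best_score = float("inf")  # assumes the metric is being minimised
        self.stagnant_rounds = 0

    def after_iteration(self, model, epoch, evals_log):
        score = evals_log[self.data_name][self.metric_name][-1]
        if self.best_score - score > self.tolerance:
            # Improvement exceeded the tolerance: reset the patience counter.
            self.best_score = score
            self.stagnant_rounds = 0
        else:
            self.stagnant_rounds += 1
        # Returning True tells xgb.train to stop training.
        return self.stagnant_rounds >= self.rounds
```

It would then be passed via the `callbacks` argument instead of `early_stopping_rounds`, e.g. `callbacks=[TolEarlyStopping(rounds=10, tolerance=1e-4, data_name="eval", metric_name="rmse")]`, where `"eval"` has to match the label used in the watchlist.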
`tol` is a common parameter in scikit-learn models, such as MLPClassifier and QuadraticDiscriminantAnalysis. Thank you.