I am wondering if my K-Fold implementation is correct:
from sklearn.model_selection import KFold
import xgboost as xgb

# random_state only takes effect when shuffle=True; recent scikit-learn
# versions raise a ValueError for shuffle=False combined with a random_state
kf = KFold(n_splits=numFolds, shuffle=True, random_state=7)
sales_prediction_model = xgb.XGBRegressor(
    verbosity=1,  # `silent` is deprecated in recent xgboost; use verbosity
    learning_rate=0.03,
    n_estimators=10000,
    max_depth=4,
    # subsample=0.8,  # the parameter name is `subsample`, not `sub_sample`
    gamma=1,
    colsample_bytree=0.8,
    n_jobs=30,
)
for train_index, test_index in kf.split(X_train):
    X_tr, X_te = X_train.iloc[train_index], X_train.iloc[test_index]
    y_tr, y_te = y_train.iloc[train_index], y_train.iloc[test_index]
    eval_set = [(X_tr, y_tr), (X_te, y_te)]
    # note: in xgboost >= 2.0, early_stopping_rounds and eval_metric moved
    # from fit() to the XGBRegressor constructor
    sales_prediction_model.fit(X_tr, y_tr, verbose=False,
                               early_stopping_rounds=15, eval_set=eval_set,
                               eval_metric="mae")
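
For completeness, this is how I record each fold's result (a sketch; fold_maes is just a local list I keep, and I am assuming best_score and best_iteration are populated once early stopping fires):

fold_maes = []
for train_index, test_index in kf.split(X_train):
    X_tr, X_te = X_train.iloc[train_index], X_train.iloc[test_index]
    y_tr, y_te = y_train.iloc[train_index], y_train.iloc[test_index]
    sales_prediction_model.fit(
        X_tr, y_tr, verbose=False, early_stopping_rounds=15,
        eval_set=[(X_tr, y_tr), (X_te, y_te)], eval_metric="mae",
    )
    # best_score is the MAE on the last eval_set entry (the held-out fold)
    fold_maes.append(sales_prediction_model.best_score)
    print("fold MAE %.4f at round %d" % (
        sales_prediction_model.best_score,
        sales_prediction_model.best_iteration,
    ))
print("mean MAE over folds: %.4f" % (sum(fold_maes) / len(fold_maes)))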
Does the fit function continue training from the previous fold, or does it start again from scratch each time?
Thanks for your help.
(The xgboost documentation only states: "Fit gradient boosting model".)
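
My current guess, which I would like confirmed, is that each fit() call builds a fresh booster unless a previous one is passed in via the xgb_model parameter of fit(). Something like the following (synthetic data purely for illustration):

import numpy as np
import xgboost as xgb

rng = np.random.RandomState(0)
X_demo = rng.rand(200, 5)
y_demo = X_demo.sum(axis=1) + rng.normal(scale=0.1, size=200)

# train 100 rounds from scratch
first_half = xgb.XGBRegressor(n_estimators=100, learning_rate=0.03)
first_half.fit(X_demo, y_demo)

# resume from the existing booster: this adds 100 more rounds on top of it
second_half = xgb.XGBRegressor(n_estimators=100, learning_rate=0.03)
second_half.fit(X_demo, y_demo, xgb_model=first_half.get_booster())

If that is right, my loop above trains numFolds independent models rather than one model refined across folds.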