I am using the code below to train an XGBoost model on a GPU.
The problem is that both the GPU (an NVIDIA 1050) and all CPU cores are being used at the same time. The NVIDIA system monitor shows 85 to 90% GPU utilization, and the Linux system monitor shows all cores working.
There are two questions here:

1. Why is xgb_cv using both the GPU and the CPU when tree_method is set to 'gpu_hist'?
2. When the model is trained with 'hist' instead of 'gpu_hist', it finishes in half the time using CPU cores only. Why is the GPU path slower? (A standalone timing sketch follows this list.)
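
To make the comparison in point 2 reproducible outside the grid search, a minimal sketch along these lines (synthetic placeholder data with a hypothetical shape, not my real set) times the two tree methods back to back:

import time

import numpy as np
from xgboost import XGBClassifier

# Synthetic placeholder data; the shape is hypothetical, not my real dataset
X = np.random.rand(50_000, 50)
y = np.random.randint(0, 3, size=50_000)

# 'gpu_hist' requires an XGBoost build with GPU support
for method in ('hist', 'gpu_hist'):
    clf = XGBClassifier(n_estimators=200, tree_method=method)
    start = time.time()
    clf.fit(X, y)  # identical fit; only the tree method changes
    print("%s: %.1f s" % (method, time.time() - start))
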
Thanks
import time

from xgboost import XGBClassifier
from sklearn.model_selection import GridSearchCV

# Multi-class classifier configured to build trees on the GPU
model_xgb = XGBClassifier(objective='multi:softmax',
                          num_class=3,
                          tree_method='gpu_hist',
                          predictor='gpu_predictor',
                          verbosity=1)
xgb_cv = GridSearchCV(model_xgb,
                      {'colsample_bytree': [0.8, 0.6],
                       'min_child_weight': [0, 5],
                       'max_depth': [3, 4],
                       'n_estimators': [500],
                       'learning_rate': [0.01, 0.1]},
                      cv=2, verbose=1)
## Fit with cross validation
start_time = time.time()
xgb_cv.fit(X_train, Y_train, verbose=1)
duration = (time.time() - start_time) / 60
print("XGBoost hyperparameter tuning took %.2f minutes" % duration)