The following code is not working, where aucerr and aoeerr are custom evaluation metrics. It works with just one eval_metric, either aucerr or aoeerr:
prtXGB.fit(trainData, targetVar, early_stopping_rounds=10,
           eval_metric=[aucerr, aoeerr], eval_set=[(valData, valTarget)])
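For comparison, this single-metric version of the same call does work:

prtXGB.fit(trainData, targetVar, early_stopping_rounds=10,
           eval_metric=aucerr, eval_set=[(valData, valTarget)])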
However, the following code with built-in evaluation metrics is working:
prtXGB.fit(trainData, targetVar, early_stopping_rounds=10,
           eval_metric=['auc', 'logloss'], eval_set=[(valData, valTarget)])
Here are my custom functions:
from sklearn import metrics

def aucerr(y_predicted, y_true):
    # y_true is the evaluation DMatrix; pull the raw labels out of it
    labels = y_true.get_label()
    auc1 = metrics.roc_auc_score(labels, y_predicted)
    # distance from a perfect AUC, so smaller is better
    return 'AUCerror', abs(1 - auc1)

def aoeerr(y_predicted, y_true):
    labels = y_true.get_label()
    # actual-over-expected ratio: total observed positives vs. total predicted
    actuals = sum(labels)
    predicted = sum(y_predicted)
    ae = actuals / predicted
    return 'AOEerror', abs(1 - ae)
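For reference, here is a minimal runnable sketch of the whole setup, using the functions above. The data is synthetic and n_estimators is a placeholder; prtXGB, trainData, targetVar, valData, and valTarget stand in for my real objects. Only the eval_metric argument differs between the call that works and the one that fails:

import numpy as np
import xgboost as xgb

# synthetic stand-ins for my real training and validation data
rng = np.random.RandomState(0)
trainData, targetVar = rng.rand(200, 5), rng.randint(0, 2, 200)
valData, valTarget = rng.rand(50, 5), rng.randint(0, 2, 50)

prtXGB = xgb.XGBClassifier(n_estimators=50)

# works: a single custom callable
prtXGB.fit(trainData, targetVar, early_stopping_rounds=10,
           eval_metric=aucerr, eval_set=[(valData, valTarget)])

# fails: a list of custom callables
prtXGB.fit(trainData, targetVar, early_stopping_rounds=10,
           eval_metric=[aucerr, aoeerr], eval_set=[(valData, valTarget)])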