
The following code is not working, where aucerr and aoeerr are custom evaluation metrics; it works fine with just one eval_metric, either aucerr or aoeerr:

prtXGB.fit(trainData, targetVar, early_stopping_rounds=10,
           eval_metric=[aucerr, aoeerr], eval_set=[(valData, valTarget)])

However, the following code with built-in evaluation metrics works:

prtXGB.fit(trainData, targetVar, early_stopping_rounds=10,
           eval_metric=['auc', 'logloss'], eval_set=[(valData, valTarget)])

Here are my custom functions:

from sklearn import metrics

# Both metrics follow xgboost's custom-metric signature f(y_predicted, dtrain),
# where dtrain is a DMatrix; each returns a (name, value) tuple.
def aucerr(y_predicted, y_true):
    labels = y_true.get_label()
    auc1 = metrics.roc_auc_score(labels, y_predicted)
    return 'AUCerror', abs(1 - auc1)

def aoeerr(y_predicted, y_true):
    labels = y_true.get_label()
    actuals = sum(labels)
    predicted = sum(y_predicted)
    ae = actuals / predicted
    return 'AOEerror', abs(1 - ae)
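One workaround I am considering, but have not verified, is wrapping both metrics in a single callable, assuming xgboost's custom-metric hook accepts a list of (name, value) tuples returned from one function:

def combined_metrics(y_predicted, dtrain):
    # Untested sketch: return both custom scores from a single callable,
    # assuming the hook iterates over a returned list of (name, value) tuples.
    return [aucerr(y_predicted, dtrain), aoeerr(y_predicted, dtrain)]

prtXGB.fit(trainData, targetVar, early_stopping_rounds=10,
           eval_metric=combined_metrics, eval_set=[(valData, valTarget)])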
BigDataScientist
    Can you show the code for your custom functions? What do they return? Also post the full stack trace of error. – Vivek Kumar Jun 15 '17 at 08:02
  • I don't think that is the problem; as I mentioned, they work fine if I use them individually. – BigDataScientist Jun 15 '17 at 12:38
    Aah yes, My bad. Try wrapping the eval_set in a tuple like this: `eval_set=[(valData, valTarget)]`. And post the full stack trace of error. – Vivek Kumar Jun 15 '17 at 13:41
  • I am doing that already, and it works fine with one custom metric. When I add two custom metrics as mentioned in the question, Python stops abruptly and produces the error below: `Tree method is automatically selected to be 'approx' for faster speed. to use old behavior(exact greedy algorithm on single machine), set tree_method to 'exact' [10:38:56] c:\dev\libs\xgboost\dmlc-core\include\dmlc\./logging.h:235: [10:38:56] C:\dev\libs\xgboost\src\metric\metric.cc:21: Unknown metric function ` – BigDataScientist Jun 15 '17 at 14:40

0 Answers