auc = roc_auc_score(CV_label, y_pred_proba) * 100

However, I am told that I cannot find the error using (100 - roc_auc_score). I found a previously answered question about the equal error rate. Link: Equal Error Rate in Python. How do I find the error using only the roc_auc_score?

  • Can you clarify what error you want to calculate exactly? I assume you are using "error rate" to mean 1 - accuracy, i.e. the proportion of misclassified examples? What does the EER have to do with it here? – Calimo Oct 30 '18 at 07:07
  • I need to find the hyperparameter with the least error when ROC is used as the metric. I have imbalanced data with a 60:20:20::train:CV:test split. I am using the KNN algorithm and have to find the optimal 'K' value (the one with the least error) on the CV data. How do I find it? – TheHumanSpider Oct 30 '18 at 08:05
  • Calimo, I am using simple Cross Validation. – TheHumanSpider Oct 30 '18 at 08:11

1 Answer


You can't. The ROC AUC is a summary of the ROC curve, and it is impossible to recover the information about a single point from the summarized metric.
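To illustrate the point, here is a minimal sketch (toy data, names are illustrative): two score vectors with the same ranking produce the same ROC AUC, yet imply different hard predictions, and therefore different error rates, at the 0.5 threshold. The AUC alone cannot distinguish them.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, accuracy_score

y_true = np.array([0, 0, 1, 1])
# Same ranking of positives above negatives -> same (perfect) AUC...
scores_a = np.array([0.1, 0.2, 0.8, 0.9])
scores_b = np.array([0.6, 0.7, 0.8, 0.9])
print(roc_auc_score(y_true, scores_a))  # 1.0
print(roc_auc_score(y_true, scores_b))  # 1.0

# ...but different hard labels at threshold 0.5, hence different error rates.
print(1 - accuracy_score(y_true, scores_a >= 0.5))  # 0.0
print(1 - accuracy_score(y_true, scores_b >= 0.5))  # 0.5
```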

You should use the accuracy_score metric instead. Note that accuracy_score expects hard class labels, not probabilities, so pass the output of predict (or threshold your probabilities) rather than y_pred_proba:

accuracy = accuracy_score(CV_label, y_pred)
error = 1 - accuracy
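For the K-selection question in the comments, a minimal sketch of picking K by error rate on the CV split might look like the following. The synthetic data and the names X_train, X_cv, CV_label are assumptions standing in for the question's 60:20:20 train/CV/test split.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in data; in the question this would be the real
# train and CV portions of the 60:20:20 split.
X, y = make_classification(n_samples=500, random_state=0)
X_train, X_cv, y_train, CV_label = train_test_split(
    X, y, test_size=0.4, random_state=0)

best_k, best_error = None, float("inf")
for k in range(1, 30, 2):  # odd K avoids voting ties in binary KNN
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    y_pred = knn.predict(X_cv)  # hard labels, not probabilities
    error = 1 - accuracy_score(CV_label, y_pred)
    if error < best_error:
        best_k, best_error = k, error

print(best_k, best_error)
```

The same loop works with any scalar metric; if you want to rank K values by ROC AUC instead, score knn.predict_proba(X_cv)[:, 1] with roc_auc_score and keep the K with the highest value.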
Calimo