
I'm following the ROC graph example given in the sklearn docs here (a Jupyter notebook can also be downloaded from there). It generates a ROC graph for the multi-class problem on the Iris dataset.

In the original example, the predictions are generated using the SVM classifier's decision_function method, which generates this graph:

roc graph by decision_function

When I change it to generate the predictions using predict_proba, the ROC graph changes dramatically (mostly for class 1):

roc graph by predict_proba

I do not understand why this happens. Prediction probabilities are determined by the decision function, so how come there's such a huge change in class 1?

EDIT: The change is: `y_score = classifier.fit(X_train, y_train).decision_function(X_test)` becomes `y_score = classifier.fit(X_train, y_train).predict_proba(X_test)`
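
To make the discrepancy concrete, the per-class AUC from both scorings can be compared on the same fitted model. A minimal sketch, reusing the variables defined in the full code under EDIT 2 below:

# Compare the two scorings per class on the same fitted model
# (all variables come from the full code under EDIT 2).
classifier.fit(X_train, y_train)
dec = classifier.decision_function(X_test)
prob = classifier.predict_proba(X_test)
for i in range(n_classes):
    print(i,
          roc_auc_score(y_test[:, i], dec[:, i]),    # AUC from decision values
          roc_auc_score(y_test[:, i], prob[:, i]))   # AUC from Platt-scaled probabilities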

EDIT 2: Full code I'm running -

print(__doc__)

import numpy as np
import matplotlib.pyplot as plt
from itertools import cycle

from sklearn import svm, datasets
from sklearn.metrics import roc_curve, auc
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize
from sklearn.multiclass import OneVsRestClassifier
from sklearn.metrics import roc_auc_score

# Import some data to play with
iris = datasets.load_iris()
X = iris.data
y = iris.target

# Binarize the output
y = label_binarize(y, classes=[0, 1, 2])
n_classes = y.shape[1]

# Add noisy features to make the problem harder
random_state = np.random.RandomState(0)
n_samples, n_features = X.shape
X = np.c_[X, random_state.randn(n_samples, 200 * n_features)]

# shuffle and split training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.5,
                                                    random_state=0)

# Learn to predict each class against the other
classifier = OneVsRestClassifier(svm.SVC(kernel='linear', probability=True,
                                 random_state=random_state))
y_score = classifier.fit(X_train, y_train).predict_proba(X_test)  # Here's my change

# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
    fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_score[:, i])
    roc_auc[i] = auc(fpr[i], tpr[i])

# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(y_test.ravel(), y_score.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])

plt.figure()
lw = 2
plt.plot(fpr[2], tpr[2], color='darkorange',
         lw=lw, label='ROC curve (area = %0.2f)' % roc_auc[2])
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()

# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))

# Then interpolate all ROC curves at these points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
    mean_tpr += np.interp(all_fpr, fpr[i], tpr[i])

# Finally average it and compute AUC
mean_tpr /= n_classes

fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])

# Plot all ROC curves
plt.figure()
plt.plot(fpr["micro"], tpr["micro"],
         label='micro-average ROC curve (area = {0:0.2f})'
               ''.format(roc_auc["micro"]),
         color='deeppink', linestyle=':', linewidth=4)

plt.plot(fpr["macro"], tpr["macro"],
         label='macro-average ROC curve (area = {0:0.2f})'
               ''.format(roc_auc["macro"]),
         color='navy', linestyle=':', linewidth=4)

colors = cycle(['aqua', 'darkorange', 'cornflowerblue'])
for i, color in zip(range(n_classes), colors):
    plt.plot(fpr[i], tpr[i], color=color, lw=lw,
             label='ROC curve of class {0} (area = {1:0.2f})'
             ''.format(i, roc_auc[i]))

plt.plot([0, 1], [0, 1], 'k--', lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Some extension of Receiver operating characteristic to multi-class')
plt.legend(loc="lower right")
plt.show()

y_prob = classifier.predict_proba(X_test)

macro_roc_auc_ovo = roc_auc_score(y_test, y_prob, multi_class="ovo",
                                  average="macro")
weighted_roc_auc_ovo = roc_auc_score(y_test, y_prob, multi_class="ovo",
                                     average="weighted")
macro_roc_auc_ovr = roc_auc_score(y_test, y_prob, multi_class="ovr",
                                  average="macro")
weighted_roc_auc_ovr = roc_auc_score(y_test, y_prob, multi_class="ovr",
                                     average="weighted")
print("One-vs-One ROC AUC scores:\n{:.6f} (macro),\n{:.6f} "
      "(weighted by prevalence)"
      .format(macro_roc_auc_ovo, weighted_roc_auc_ovo))
print("One-vs-Rest ROC AUC scores:\n{:.6f} (macro),\n{:.6f} "
      "(weighted by prevalence)"
      .format(macro_roc_auc_ovr, weighted_roc_auc_ovr))
shakedzy
  • Can you give a code sample of exactly what you changed? I copied the code and only changed decision_function to predict_proba, and I get results consistent with what decision_function gave. – jawsem Mar 28 '20 at 15:53
  • I'm only changing `y_score = classifier.fit(X_train, y_train).decision_function(X_test)` to `y_score = classifier.fit(X_train, y_train).predict_proba(X_test)`. That's it – shakedzy Mar 28 '20 at 15:58
  • Please do *not* post code in the comments - edit & update your post instead! – desertnaut Mar 28 '20 at 16:02
  • I did the same thing as you and I am not seeing the same result. I see the same graph between the decision_function and predict_proba (which makes sense). – jawsem Mar 28 '20 at 16:02
  • I have no idea how this happens.. – shakedzy Mar 28 '20 at 16:06
  • I will add that your graph looks mirrored for class 1. Basically it's doing the opposite of what it should. If possible, can you provide all the code? I know you say it is the same as the sample, but maybe there is something else we are missing. – jawsem Mar 28 '20 at 16:12
  • Added the full code – shakedzy Mar 28 '20 at 16:15
  • It really is happening on my machine alone.. tested it on Colab and there's no difference.. WHAT THE..? – shakedzy Mar 28 '20 at 17:37

2 Answers


They are not the same thing. Since an SVM is not a probabilistic model, several strategies have been proposed to compute the probability/score of a sample belonging to a class. decision_function returns values related to the distance of the sample from the separating hyperplane (see the documentation), while predict_proba (see the documentation) computes probabilities using Platt scaling, which is essentially a logistic regression fitted with cross-validation on the training data.
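
For intuition, here is a rough sketch of what Platt scaling does, reusing the variables from the question's code (classifier already fitted). It is only an approximation: libsvm's internal implementation additionally uses cross-validation per binary problem, so it will not reproduce predict_proba exactly.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Rough illustration only: fit a sigmoid (logistic regression) that maps the
# SVM's decision values to probabilities, one per one-vs-rest binary problem.
dec_train = classifier.decision_function(X_train)   # shape (n_samples, n_classes)
dec_test = classifier.decision_function(X_test)

platt_proba = np.column_stack([
    LogisticRegression()
    .fit(dec_train[:, [k]], y_train[:, k])
    .predict_proba(dec_test[:, [k]])[:, 1]
    for k in range(n_classes)
])

A strictly monotonic per-class mapping would leave the ROC curve unchanged; the differences come from the cross-validated fit (and, in the multiclass case, the coupling of the per-class probabilities), which is why the user-guide warning quoted below says the estimates may be inconsistent with the scores.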

For further information, check the scikit-learn user guide for SVMs. It emphasizes that:

The cross-validation involved in Platt scaling is an expensive operation for large datasets. In addition, the probability estimates may be inconsistent with the scores.

It also notes some theoretical issues:

Platt’s method is also known to have theoretical issues. If confidence scores are required, but these do not have to be probabilities, then it is advisable to set probability=False and use decision_function instead of predict_proba.
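
Applied to the question's code, that advice amounts to something like the following sketch (reusing the imports and variables from the posted code):

# Per the user guide: skip Platt scaling and use the decision values
# directly as ranking scores for the ROC curve.
classifier = OneVsRestClassifier(svm.SVC(kernel='linear', probability=False,
                                         random_state=random_state))
y_score = classifier.fit(X_train, y_train).decision_function(X_test)

fpr1, tpr1, _ = roc_curve(y_test[:, 1], y_score[:, 1])
print("class 1 AUC from decision_function:", auc(fpr1, tpr1))

For plotting ROC curves this is sufficient, since ROC analysis only needs a ranking score, not calibrated probabilities.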

  • @shakedzy can you check if OP is correct? Just asking as it's a totally different answer from your posted answer. If so it deserves my upvote on this one. End of Review. – ZF007 Nov 22 '21 at 20:54
  • agree on this answer, the issue is likely to be related to what's pointed out. Also see [this](https://stackoverflow.com/questions/68475534/svm-model-predicts-instances-with-probability-scores-greater-than-0-1default-th/70049005#70049005) for a further reference. – amiola Nov 23 '21 at 20:00

God knows why, but it has something to do with the random noise features generated with np.random.RandomState: seed 0 causes this problem, other seeds don't.
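
A quick sketch to check this seed dependence, reusing the imports and variables from the question's code and assuming the rest of the setup is unchanged:

# Rebuild the noisy features with a few different seeds and look at the
# class-1 AUC obtained from predict_proba; a value far below 0.5 corresponds
# to the "mirrored" class-1 curve discussed above.
for seed in range(5):
    rs = np.random.RandomState(seed)
    X_noisy = np.c_[iris.data, rs.randn(n_samples, 200 * n_features)]
    Xtr, Xte, ytr, yte = train_test_split(X_noisy, y, test_size=.5,
                                          random_state=0)
    clf = OneVsRestClassifier(svm.SVC(kernel='linear', probability=True,
                                      random_state=rs))
    proba = clf.fit(Xtr, ytr).predict_proba(Xte)
    print(seed, roc_auc_score(yte[:, 1], proba[:, 1]))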

shakedzy