
As a beginner in scikit-learn trying to classify the iris dataset, I'm having trouble changing the scoring metric from scoring='accuracy' to others such as precision, recall, f1, etc. in the cross-validation step. Below is the full code sample (enough to start reading at # Test options and evaluation metric).

# Load libraries
import pandas
from pandas.plotting import scatter_matrix
import matplotlib.pyplot as plt
from sklearn import model_selection # for model_selection.cross_val_score and KFold
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC



# Load dataset
url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/iris.csv"
names = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'class']
dataset = pandas.read_csv(url, names=names)


# Split-out validation dataset
array = dataset.values
X = array[:,0:4]
Y = array[:,4]
validation_size = 0.20
seed = 7
X_train, X_validation, Y_train, Y_validation = model_selection.train_test_split(X, Y, test_size=validation_size, random_state=seed)


# Test options and evaluation metric
seed = 7
scoring = 'accuracy'


# Below, we build and evaluate 6 different models
# Spot Check Algorithms
models = []
models.append(('LR', LogisticRegression()))
models.append(('LDA', LinearDiscriminantAnalysis()))
models.append(('KNN', KNeighborsClassifier()))
models.append(('CART', DecisionTreeClassifier()))
models.append(('NB', GaussianNB()))
models.append(('SVM', SVC()))


# Evaluate each model in turn: compute the cv scores, their mean and std for each model
results = []
names = []
for name, model in models:
    # Below, we do k-fold cross-validation
    kfold = model_selection.KFold(n_splits=10, shuffle=True, random_state=seed)  # shuffle=True is needed when random_state is set
    cv_results = model_selection.cross_val_score(model, X_train, Y_train, cv=kfold, scoring=scoring)
    results.append(cv_results)
    names.append(name)
    msg = "%s: %f (%f)" % (name, cv_results.mean(), cv_results.std())
    print(msg)

Now, apart from scoring='accuracy', I'd like to evaluate other performance metrics for this multiclass classification problem. But when I use scoring='precision', it raises:

ValueError: Target is multiclass but average='binary'. Please choose another average setting.

My questions are:

1) I guess the above is happening because 'precision' and 'recall' are defined in scikit-learn only for binary classification - is that correct? If yes, which command(s) should replace scoring='accuracy' in the code above?

2) If I want to compute the confusion matrix, precision and recall for each fold while performing the k-fold cross validation, what commands should I type?

3) For the sake of experimentation, I tried scoring='balanced_accuracy', only to find:

ValueError: 'balanced_accuracy' is not a valid scoring value.

Why is this happening, when the model evaluation documentation (https://scikit-learn.org/stable/modules/model_evaluation.html) clearly lists balanced_accuracy as a scoring method? I'm quite confused here, so actual code showing how to evaluate other performance metrics would be appreciated! Thanks in advance!!

1 Answer


1) I guess the above is happening because 'precision' and 'recall' are defined in scikit-learn only for binary classification - is that correct?

No. Precision & recall are certainly valid for multi-class problems, too - see the docs for precision & recall.
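For a quick sanity check, the snippet below (with made-up toy labels, not your data) shows that both metrics accept multi-class targets once an averaging scheme is given:

from sklearn.metrics import precision_score, recall_score

# Toy 3-class labels, purely illustrative
y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 2, 2, 2, 1, 1]

# With an explicit average, both metrics work for multi-class targets
print(precision_score(y_true, y_pred, average='macro'))
print(recall_score(y_true, y_pred, average='macro'))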

If yes, which command(s) should replace scoring='accuracy' in the code above?

The problem arises because, as you can see from the documentation links provided above, the default setting for these metrics is binary classification (average='binary'). Since yours is a multi-class problem, you need to specify which exact "version" of the particular metric you are interested in (there is more than one); have a look at the relevant page of the scikit-learn documentation. Some valid options for your scoring parameter are:

'precision_macro'
'precision_micro'
'precision_weighted'
'recall_macro'
'recall_micro'
'recall_weighted'

The documentation link above even contains an example of using 'recall_macro' with the iris data - be sure to check it.
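Plugged into your own loop, the only change needed is the value of scoring; a minimal sketch reusing the variables from your code:

# Any of the multi-class-aware strings above will work here
scoring = 'precision_macro'

for name, model in models:
    kfold = model_selection.KFold(n_splits=10, shuffle=True, random_state=seed)
    cv_results = model_selection.cross_val_score(model, X_train, Y_train, cv=kfold, scoring=scoring)
    print("%s: %f (%f)" % (name, cv_results.mean(), cv_results.std()))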

2) If I want to compute the confusion matrix, precision and recall for each fold while performing the k-fold cross validation, what commands should I type?

This is not exactly trivial, but you can see a way in my answer to Cross-validation metrics in scikit-learn for each data split.
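In essence, you iterate over the folds yourself with KFold.split, so that the per-fold predictions are in your hands; a rough sketch in that spirit (using LogisticRegression and the variables from your question - any of the other models works the same way):

from sklearn.metrics import confusion_matrix, precision_score, recall_score

model = LogisticRegression()
kfold = model_selection.KFold(n_splits=10, shuffle=True, random_state=seed)

# Loop over the folds manually to get the metrics per fold
for i, (train_idx, test_idx) in enumerate(kfold.split(X_train), start=1):
    model.fit(X_train[train_idx], Y_train[train_idx])
    pred = model.predict(X_train[test_idx])
    print("Fold %d" % i)
    print(confusion_matrix(Y_train[test_idx], pred))
    print("precision: %.3f, recall: %.3f" % (
        precision_score(Y_train[test_idx], pred, average='macro'),
        recall_score(Y_train[test_idx], pred, average='macro')))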

3) For the sake of experimentation, I tried scoring='balanced_accuracy', only to find:

   ValueError: 'balanced_accuracy' is not a valid scoring value.

This is because you are probably using an older version of scikit-learn. balanced_accuracy became available only in v0.20 - you can verify that it is not available in v0.18. Upgrade your scikit-learn to v0.20 and you should be fine.
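You can verify the installed version, and list every scoring string it accepts, like so:

import sklearn
print(sklearn.__version__)  # 'balanced_accuracy' needs >= 0.20

# All scorer names recognised by your installation
# (in later versions, SCORERS was replaced by metrics.get_scorer_names())
from sklearn.metrics import SCORERS
print(sorted(SCORERS.keys()))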

  • Many thanks for your answer! Much appreciated. Yes, I did try the code after replacing scoring='accuracy' with scoring='precision_macro', 'f1_macro', etc., and they worked fine. But one quick question, since you're an experienced data scientist: there seems to be too much documentation of everything, including performance metrics, in sk-learn, and for a beginner at least it's blinding. For me, there should be exactly one neat piece of documentation for everything. So how do you determine which page to go to if you forget a command? I find the sk-learn documentation too confusing! Thanks again! – Noprogexprnce mathmtcn Feb 01 '19 at 16:11
  • @Noprogexprncemathmtcn You are very welcome. The documentation can be confusing at times, but simply googling, say, "scikit-learn precision" will get you safely started. You'll get it with experience... :) Keep in mind that for such a big & ambitious project as scikit-learn, it is not always easy to have everything gathered neatly together... – desertnaut Feb 01 '19 at 16:20