
Let's say I have the following two sets of categories and a variable containing the target names:

spam = ["blue", "white", "blue", "yellow", "red"]
flagged = ["blue", "white", "yellow", "blue", "red"]
target_names = ["blue", "white", "yellow", "red"]

When I use the confusion_matrix function as follows, this is the result:

from sklearn.metrics import confusion_matrix
confusion_matrix(spam, flagged, labels=target_names)

[[1 0 1 0]
 [0 1 0 0]
 [1 0 0 0]
 [0 0 0 1]]

However, when I give the parameter labels the information that I only want the metrics from 'blue', I get this result:

confusion_matrix(spam, flagged, labels=["blue"])

array([[1]])

With only one number I cannot calculate accuracy, precision, recall, etc. What am I doing wrong here? Filling in 'yellow', 'white' or 'blue' instead results in a 0, 1 and 1 respectively.

intStdu

1 Answer

However, when I give the parameter labels the information that I only want the metrics from 'blue'

It doesn't work like that: the labels argument only selects (and orders) which labels are tabulated in the matrix; it does not give you one-vs-rest metrics for a single class.

In multi-class settings such as yours, precision & recall are computed per class from the whole confusion matrix.
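
To see what labels is actually doing, note that only (true, predicted) pairs in which both values belong to the given labels are counted; every other pair is silently dropped. A quick check with the spam and flagged lists from your question:

from sklearn.metrics import confusion_matrix

spam = ["blue", "white", "blue", "yellow", "red"]
flagged = ["blue", "white", "yellow", "blue", "red"]

# only the single (blue, blue) pair survives; the (blue, yellow) and
# (yellow, blue) pairs are dropped because 'yellow' is not in labels
confusion_matrix(spam, flagged, labels=["blue"])
# array([[1]])

confusion_matrix(spam, flagged, labels=["blue", "white"])
# array([[1, 0],
#        [0, 1]])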

I have explained the rationale and the calculations in detail in another answer; here is how it applies to your own confusion matrix cm:

import numpy as np

# your confusion matrix:
cm = np.array([[1, 0, 1, 0],
               [0, 1, 0, 0],
               [1, 0, 0, 0],
               [0, 0, 0, 1]])

# true positives:
TP = np.diag(cm)
TP
# array([1, 1, 0, 1])

# false positives:
FP = np.sum(cm, axis=0) - TP
FP 
# array([1, 0, 1, 0])

# false negatives
FN = np.sum(cm, axis=1) - TP
FN
# array([1, 0, 1, 0])

Now, from the definition of precision & recall, we have:

precision = TP/(TP+FP)
recall = TP/(TP+FN)

which, for your example, give:

precision
# array([ 0.5,  1. ,  0. ,  1. ])

recall
# array([ 0.5,  1. ,  0. ,  1. ])

i.e. for your 'blue' class, you get 50% precision & recall.

The fact that precision & recall here happen to be identical is purely coincidental, due to the fact that the FP & FN arrays happen to be identical; try with different predictions to get a feeling...
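
You can also double-check these numbers without the manual arithmetic; here is a minimal sketch using sklearn's precision_recall_fscore_support and accuracy_score on the lists from your question:

from sklearn.metrics import precision_recall_fscore_support, accuracy_score

spam = ["blue", "white", "blue", "yellow", "red"]
flagged = ["blue", "white", "yellow", "blue", "red"]
target_names = ["blue", "white", "yellow", "red"]

# one precision & recall value per label, in target_names order
precision, recall, f1, support = precision_recall_fscore_support(
    spam, flagged, labels=target_names)

precision
# array([0.5, 1. , 0. , 1. ])
recall
# array([0.5, 1. , 0. , 1. ])

# accuracy, in contrast, is a single global number (here 3 correct out of 5)
accuracy_score(spam, flagged)
# 0.6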

desertnaut
  • Wow okay, I get it now, great explanation! One more question: how could I get the true negatives from the matrix? – intStdu Jan 10 '19 at 14:19
  • @intStdu I had started writing it, but I removed it as they are not necessary for computing precision & recall; see the linked answer (and upvotes there are welcome, too ;) – desertnaut Jan 10 '19 at 14:21
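
As for the true negatives asked about in the comments: for each class, TN is everything that lies neither in that class's row nor in its column. One way to get them, continuing from the cm, TP, FP & FN arrays above:

# true negatives: total count minus that class's row and column contributions
TN = cm.sum() - (TP + FP + FN)
TN
# array([2, 4, 3, 4])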