
I would like to obtain the true positive rate (TPR) and the true negative rate (TNR) as evaluation metrics in the model.compile() statement.

I have tried using the following code:

model.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ["accuracy", "tpr", "tnr"])

However, I get an error saying:

Unknown metric function: tpr

I believe that they are both known metrics in Keras, so I don't understand this error. Please assist.

  • What do you mean "I believe"? They are clearly *not*: https://keras.io/metrics/ – desertnaut Feb 02 '19 at 13:39
  • Sorry my mistake, I saw this github thread https://github.com/gagneurlab/concise/issues/5 and thought it was possible. I used ConfusionMatrix from the panda_ml library which is very similar to your solution. Thank you very much. – Kusi Feb 02 '19 at 15:42
  • You are very welcome – desertnaut Feb 02 '19 at 15:43

1 Answer


As is clear from the relevant Keras docs, tpr & tnr are not part of the native Keras metrics; there is a relevant Github thread, but the issue is still open.

But for the binary case you seem to be working on, it is straightforward to get the required quantities from scikit-learn (you'll need to convert the model outcomes to binary labels, i.e. not probabilities; see the sketch further below); adapting the example from the docs:

from sklearn.metrics import confusion_matrix
y_true = [0, 1, 0, 1]
y_pred = [1, 1, 1, 0]
cm = confusion_matrix(y_true, y_pred) # careful with the order of arguments!
tn, fp, fn, tp = cm.ravel()
(tn, fp, fn, tp)
# (0, 2, 1, 1)

Having obtained these quantities, it is now straightforward to compute TPR & TNR (see the definitions in Wikipedia):

TPR = tp/(tp+fn)
TPR
# 0.5

TNR = tn/(tn+fp)
TNR
# 0.0
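On real model outputs, keep in mind that model.predict typically returns probabilities (e.g. from a sigmoid output layer), so, as mentioned above, you would first threshold them into hard 0/1 labels before building the confusion matrix. A minimal sketch of that step, where model, X_test, and y_test are placeholders for your trained model and held-out data:

import numpy as np
from sklearn.metrics import confusion_matrix

y_prob = model.predict(X_test)               # probabilities in [0, 1]
y_pred = (y_prob > 0.5).astype(int).ravel()  # threshold at 0.5 to get binary labels

tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()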

The multi-class case is a bit more complicated - see my answer in "How to get precision, recall and f-measure from confusion matrix in Python".
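For reference, the general idea there is to read the per-class quantities off the multi-class confusion matrix, treating each class in turn as the "positive" one; a rough sketch (the labels below are made up purely for illustration):

import numpy as np
from sklearn.metrics import confusion_matrix

y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 2, 2, 2, 1, 1]
cm = confusion_matrix(y_true, y_pred)

tp = np.diag(cm)                # true positives per class
fp = cm.sum(axis=0) - tp        # false positives per class
fn = cm.sum(axis=1) - tp        # false negatives per class
tn = cm.sum() - (tp + fp + fn)  # true negatives per class

TPR_per_class = tp / (tp + fn)
TNR_per_class = tn / (tn + fp)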
