64

I have a multi-output (200) binary classification model which I wrote in Keras.

In this model I want to add additional metrics such as ROC and AUC, but to my knowledge Keras doesn't have built-in ROC and AUC metric functions.

I tried to import the ROC and AUC functions from scikit-learn:

from sklearn.metrics import roc_curve, auc
from keras.models import Sequential
from keras.layers import Dense
.
.
.
model.add(Dense(200, activation='relu'))
model.add(Dense(300, activation='relu'))
model.add(Dense(400, activation='relu'))
model.add(Dense(300, activation='relu'))
model.add(Dense(200, init='normal', activation='softmax'))  # output layer

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy', 'roc_curve', 'auc'])

but it's giving this error:

Exception: Invalid metric: roc_curve

How should I add ROC, AUC to keras?

Eka
  • Write your own AUC function and do model.predict - See [here](http://stackoverflow.com/a/41722962/5307226) – ahmedhosny Feb 28 '17 at 16:30
  • It is not clear from your post whether you want to compute the AUC separately for each of your outputs or not. – nbro Jan 30 '20 at 01:26

8 Answers

67

Because you can't calculate ROC and AUC on mini-batches, you can only calculate them at the end of an epoch. There is a solution from jamartinh; I have patched the code below for convenience:

from sklearn.metrics import roc_auc_score
from keras.callbacks import Callback

class RocCallback(Callback):
    def __init__(self, training_data, validation_data):
        self.x = training_data[0]
        self.y = training_data[1]
        self.x_val = validation_data[0]
        self.y_val = validation_data[1]

    def on_epoch_end(self, epoch, logs={}):
        # score the full training and validation sets once per epoch;
        # the other Callback hooks default to no-ops and can be omitted
        y_pred_train = self.model.predict_proba(self.x)
        roc_train = roc_auc_score(self.y, y_pred_train)
        y_pred_val = self.model.predict_proba(self.x_val)
        roc_val = roc_auc_score(self.y_val, y_pred_val)
        print('\rroc-auc_train: %s - roc-auc_val: %s' %
              (str(round(roc_train, 4)), str(round(roc_val, 4))),
              end=100 * ' ' + '\n')

roc = RocCallback(training_data=(X_train, y_train),
                  validation_data=(X_test, y_test))

model.fit(X_train, y_train, 
          validation_data=(X_test, y_test),
          callbacks=[roc])
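
To address the comment below about using validation_split: older Keras versions expose the held-out split to callbacks as self.validation_data, so you can drop the explicit arguments. A sketch under that assumption (the attribute was removed in later Keras releases, and the RocAucVal name is illustrative):

class RocAucVal(Callback):
    # assumes an older Keras that populates self.validation_data during
    # fit() whenever validation_data or validation_split is given
    def on_epoch_end(self, epoch, logs={}):
        x_val, y_val = self.validation_data[0], self.validation_data[1]
        roc_val = roc_auc_score(y_val, self.model.predict(x_val))
        print('\repoch %d - roc-auc_val: %.4f' % (epoch + 1, roc_val))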

A more hackable way is to use tf.contrib.metrics.streaming_auc:

import tensorflow as tf
from sklearn.datasets import make_classification
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import np_utils
from keras.callbacks import EarlyStopping


# define roc_callback, inspired by https://github.com/keras-team/keras/issues/6050#issuecomment-329996505
def auc_roc(y_true, y_pred):
    # any tensorflow metric
    value, update_op = tf.contrib.metrics.streaming_auc(y_pred, y_true)

    # find all variables created for this metric
    metric_vars = [i for i in tf.local_variables() if 'auc_roc' in i.name.split('/')[1]]

    # Add metric variables to GLOBAL_VARIABLES collection.
    # They will be initialized for new session.
    for v in metric_vars:
        tf.add_to_collection(tf.GraphKeys.GLOBAL_VARIABLES, v)

    # force to update metric values
    with tf.control_dependencies([update_op]):
        value = tf.identity(value)
        return value

# generate a small dataset
N_all = 10000
N_tr = int(0.7 * N_all)
N_te = N_all - N_tr
X, y = make_classification(n_samples=N_all, n_features=20, n_classes=2)
y = np_utils.to_categorical(y, num_classes=2)

X_train, X_valid = X[:N_tr, :], X[N_tr:, :]
y_train, y_valid = y[:N_tr, :], y[N_tr:, :]

# model & train
model = Sequential()
model.add(Dense(2, activation="softmax", input_shape=(X.shape[1],)))

model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy', auc_roc])

my_callbacks = [EarlyStopping(monitor='auc_roc', patience=300, verbose=1, mode='max')]  # monitor 'val_auc_roc' to stop on the validation-set value

model.fit(X, y,
          validation_split=0.3,
          shuffle=True,
          batch_size=32, epochs=5, verbose=1,
          callbacks=my_callbacks)

# # or use independent valid set
# model.fit(X_train, y_train,
#           validation_data=(X_valid, y_valid),
#           batch_size=32, epochs=5, verbose=1,
#           callbacks=my_callbacks)
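
Note that tf.contrib was removed in TensorFlow 2.x. If you are on TF 2, a sketch of the closest built-in replacement (not a drop-in equivalent of the streaming-variable trick above):

import tensorflow as tf

# tf.keras ships an approximate AUC metric; pass an instance to compile()
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy', tf.keras.metrics.AUC(name='auc_roc')])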
William
  • Is it possible to call roc_callback on a different validation set on each epoch, say by specifying a validation_split and shuffle=True inside the fit method and then passing the validation set to the roc callback? I'm not sure of the correct syntax to do that. Any help? Thank you – Ahmed Besbes Feb 26 '18 at 11:02
  • @AhmedBesbes I have updated this answer. It now contains a solution using `tf.contrib.metrics.streaming_auc`. You can use `validation_split` and `shuffle`, and it runs faster. – William May 15 '18 at 02:40
  • This should be the accepted solution. Using AUC as a metric doesn't work because Keras calculates the AUC for each minibatch and averages the results; such a calculation is not valid for AUC (but it is for accuracy, for example). – Guy s Jul 30 '19 at 11:49
41

Like you, I prefer using scikit-learn's built-in methods to evaluate AUROC. I find that the best and easiest way to do this in Keras is to create a custom metric. If TensorFlow is your backend, this can be implemented in very few lines of code:

import tensorflow as tf
from sklearn.metrics import roc_auc_score

def auroc(y_true, y_pred):
    return tf.py_func(roc_auc_score, (y_true, y_pred), tf.double)

# Build Model...

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy', auroc])

Creating a custom Callback as mentioned in other answers will not work for your case since your model has multiple outputs, but this will. Additionally, this method allows the metric to be evaluated on both training and validation data, whereas a Keras callback does not have access to the training data and can thus only be used to evaluate performance on the validation data.
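
As the comments below point out, roc_auc_score raises a ValueError whenever a batch contains only one class. A sketch of the try/except wrapper several commenters describe (the auroc_safe name and the neutral 0.5 fallback are illustrative choices):

def auroc_safe(y_true, y_pred):
    # AUC is undefined when y_true holds a single class; return a
    # neutral 0.5 so training does not abort on such batches
    try:
        return roc_auc_score(y_true, y_pred)
    except ValueError:
        return 0.5

def auroc(y_true, y_pred):
    return tf.py_func(auroc_safe, (y_true, y_pred), tf.double)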

Kimball Hill
  • After a few epochs, I get: _ValueError: Only one class present in y_true. ROC AUC score is not defined in that case._ This probably happened during one of the batches. Using try/except solved the problem, but not exactly as suggested in this answer: https://stackoverflow.com/a/45139405/4548320. Because I use TensorFlow as a backend, the try/except was not working there; I had to define a new function auc2, place the try/except inside it, and pass auc2 as the argument to tf.py_func. – Guy s Jul 16 '19 at 13:08
  • @Guy s one solution I have tried was from this link: https://stackoverflow.com/questions/45139163/roc-auc-score-only-one-class-present-in-y-true by Dmitry Konovalov. It worked for me. – Ach113 Jul 28 '19 at 01:17
  • It's very similar to what I did, but you must return a value for Keras (you can't use `pass`). – Guy s Jul 28 '19 at 08:11
  • @Guy s Can you please tell me how you fixed the ValueError, i.e. "Only one class present in y_true"? Can you please share how you made your new function auc2? – user_12 Aug 21 '19 at 22:23
  • @user_12, actually I deleted the code. This solution should be avoided. This exception is just a symptom of a bigger problem: AUC should not be calculated on minibatches and averaged, as Keras does. Rather, it should be calculated using a callback. Use this solution (found on this page): https://stackoverflow.com/a/46844409/4548320 – Guy s Aug 26 '19 at 12:12
  • @Guy s What would be the problem if we calculate it by minibatches? I increased the batch size to 400 and the issue was solved, and I was able to get scores as well. Is there any problem with calculating it by minibatches? – user_12 Aug 28 '19 at 15:32
  • @user_12 Unlike accuracy, you must calculate AUC on the whole dataset at once; mathematically it is not equivalent to calculating it on minibatches and averaging the results. Perhaps your exception is no longer shown, but the AUC you are getting is not correct. – Guy s Sep 01 '19 at 09:08
  • @Guy s Can you provide any reference/links/more info on why it's mathematically not equal to calculating by mini-batches and averaging the results? – user_12 Sep 01 '19 at 15:31
  • @user_12 you can use this code to see the difference: import numpy as np; from sklearn.metrics import roc_auc_score; y_true = np.array([0, 0, 1, 1]); y_scores = np.array([0.1, 0.4, 0.35, 0.8]); auc0 = roc_auc_score(y_true, y_scores); print('true auc:', auc0); y_true = np.array([0, 1]); y_scores = np.array([0.1, 0.8]); auc1 = roc_auc_score(y_true, y_scores); y_true = np.array([0, 1]); y_scores = np.array([0.4, 0.35]); auc2 = roc_auc_score(y_true, y_scores); print('averaged auc', (auc1 + auc2) / 2) – Guy s Sep 08 '19 at 08:22
23

The following solution worked for me:

import tensorflow as tf
from keras import backend as K

def auc(y_true, y_pred):
    auc = tf.metrics.auc(y_true, y_pred)[1]  # index 1 is the update op
    K.get_session().run(tf.local_variables_initializer())  # initialize the metric's local variables
    return auc

model.compile(loss="binary_crossentropy", optimizer='adam', metrics=[auc])
B. Kanani
  • Just a note - if you are using `tensorflow.keras` instead of just `keras`, you should of course do `from tensorflow.keras import backend as K`, otherwise you'll get errors because of the different versions. – tsveti_iko Feb 11 '19 at 10:14
  • Another note: TensorFlow's AUC is an approximation and differs from sklearn's result. https://github.com/tensorflow/tensorflow/issues/14834 – Flipper Mar 17 '19 at 20:43
  • Note that this solution will not give you an accurate AUC, just an approximation, as Keras averages the results of minibatches; it may also raise an unnecessary exception as a result. This code may be short, but you should really consider one of the other answers. – Guy s Sep 24 '19 at 10:38
15

I solved my problem this way.

Suppose you have a test dataset x_test for the features and y_test for the corresponding targets.

First, we predict the targets from the features using our trained model:

 y_pred = model.predict_proba(x_test)

Then we import the roc_auc_score function from sklearn and simply pass the true targets and the predicted targets to it:

 roc_auc_score(y_test, y_pred)
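
Since the question's model has 200 binary outputs, scikit-learn can also report one score per output (a sketch, assuming y_test and y_pred are both (n_samples, 200) arrays):

import numpy as np
from sklearn.metrics import roc_auc_score

per_output_auc = roc_auc_score(y_test, y_pred, average=None)  # one AUC per output
print('mean AUC over all outputs:', np.mean(per_output_auc))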
Eka
12

You can monitor AUC during training by providing the metrics in the following way:

METRICS = [
      keras.metrics.TruePositives(name='tp'),
      keras.metrics.FalsePositives(name='fp'),
      keras.metrics.TrueNegatives(name='tn'),
      keras.metrics.FalseNegatives(name='fn'), 
      keras.metrics.BinaryAccuracy(name='accuracy'),
      keras.metrics.Precision(name='precision'),
      keras.metrics.Recall(name='recall'),
      keras.metrics.AUC(name='auc'),
]


model = keras.Sequential([
    keras.layers.Dense(16, activation='relu', input_shape=(train_features.shape[-1],)),
    keras.layers.Dense(1, activation='sigmoid'),
  ])

model.compile(
    optimizer=keras.optimizers.Adam(lr=1e-3),
    loss=keras.losses.BinaryCrossentropy(),
    metrics=METRICS)

For a more detailed tutorial, see https://www.tensorflow.org/tutorials/structured_data/imbalanced_data

0-_-0
6

'roc_curve' and 'auc' are not standard Keras metrics, so you can't pass them as strings to the metrics argument; that is not allowed. You can pass something like 'fmeasure', which is a standard metric.

Review the available metrics here: https://keras.io/metrics/. You may also want to have a look at making your own custom metric: https://keras.io/metrics/#custom-metrics
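
For reference, a custom metric is just a function of (y_true, y_pred) that returns a tensor; a minimal sketch (this per-batch recall, here named true_positive_rate, is only an illustration, not an AUC):

from keras import backend as K

def true_positive_rate(y_true, y_pred):
    # per-batch recall: TP / (TP + FN)
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
    return true_positives / (possible_positives + K.epsilon())

model.compile(loss='binary_crossentropy', optimizer='adam',
              metrics=[true_positive_rate])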

Also have a look at the generate_results method mentioned in this blog for ROC and AUC: https://vkolachalama.blogspot.in/2016/05/keras-implementation-of-mlp-neural.html

sunil manikani
1

Adding to the answers above: I got the error "ValueError: bad input shape ...", so I select the vector of positive-class probabilities as follows:

y_pred = model.predict_proba(x_test)[:, 1]  # keep only the positive-class column
auc = roc_auc_score(y_test, y_pred)
print(auc)
KarthikS
1

Set up your model architecture with tf.keras.metrics.AUC(); read the Keras documentation on classification metrics based on true/false positives and negatives.

def model_architecture_ann(in_dim, lr=0.0001):
    model = Sequential()
    model.add(Dense(512, input_dim=in_dim, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    opt = keras.optimizers.SGD(learning_rate=lr)
    auc = tf.keras.metrics.AUC(name='auc')  # built-in approximate AUC metric
    model.compile(loss='binary_crossentropy', optimizer=opt, metrics=[auc])
    model.summary()
    return model
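
A usage sketch with assumed training arrays X_train_filtered and binary labels y_train; the per-epoch AUC then appears in the training history:

model = model_architecture_ann(in_dim=X_train_filtered.shape[1], lr=0.001)
history = model.fit(X_train_filtered, y_train,
                    epochs=10, batch_size=32, validation_split=0.2)
# per-epoch values are stored under the metric's name:
# history.history['auc'] and history.history['val_auc']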
hp_elite