I'm getting different AUROC depending on when I calculate it. My code is
def auc_roc(y_true, y_pred):
    # wrap TensorFlow's streaming AUC metric for use as a Keras metric;
    # returning update_op makes Keras report the running (streaming) value
    value, update_op = tf.metrics.auc(y_true, y_pred)
    return update_op

model.compile(loss='binary_crossentropy', optimizer=optim,
              metrics=['accuracy', auc_roc])

my_callbacks = [roc_callback(training_data=(x_train, y_train),
                             validation_data=(x_test, y_test))]

model.fit(x_train, y_train, validation_data=(x_test, y_test),
          callbacks=my_callbacks)
Here roc_callback is a Keras callback that computes the AUROC at the end of each epoch using sklearn's roc_auc_score. I use the code that is defined here.
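For reference, the callback looks roughly like this (an illustrative sketch, not the exact linked code; the class name and attribute names are assumptions):

```python
from sklearn.metrics import roc_auc_score
from tensorflow import keras

class roc_callback(keras.callbacks.Callback):
    """Compute sklearn's roc_auc_score on the full train and
    validation sets once, at the end of each epoch."""

    def __init__(self, training_data, validation_data):
        self.x, self.y = training_data
        self.x_val, self.y_val = validation_data

    def on_epoch_end(self, epoch, logs=None):
        # score the *current* weights on the whole dataset in one pass
        roc = roc_auc_score(self.y, self.model.predict(self.x))
        roc_val = roc_auc_score(self.y_val, self.model.predict(self.x_val))
        print('roc-auc: %.4f - roc-auc_val: %.4f' % (roc, roc_val))
```

Unlike tf.metrics.auc, which accumulates confusion-matrix statistics across batches (and across epochs unless its local variables are reset), this callback evaluates the finished model once per epoch.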
When I train the model, I get the following statistics:
Train on 38470 samples, validate on 9618 samples
Epoch 1/15
38470/38470 [==============================] - auc_roc: 0.5116 - val_loss: 0.6899 - val_acc: 0.6274 - val_auc_roc: 0.5440
roc-auc_val: 0.5973
Epoch 2/15
38470/38470 [==============================] - auc_roc: 0.5777 - val_loss: 0.6284 - val_acc: 0.6870 - val_auc_roc: 0.6027
roc-auc_val: 0.6391
...
Epoch 12/15
38470/38470 [==============================] - auc_roc: 0.8754 - val_loss: 0.9569 - val_acc: 0.7747 - val_auc_roc: 0.8779
roc-auc_val: 0.6369
So why does the AUROC reported during training (auc_roc / val_auc_roc) keep climbing with each epoch, and why does it differ from the value the callback computes at the end of the same epoch (roc-auc_val)?