Is it possible to use custom metrics in the ModelCheckpoint callback?
Fábio Perez
1 Answer
Yes, it is possible.
Define the custom metric as described in the documentation:

```python
import keras.backend as K

def mean_pred(y_true, y_pred):
    return K.mean(y_pred)

model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy', mean_pred])
```
To check all available metrics:

```python
print(model.metrics_names)
# ['loss', 'acc', 'mean_pred']
```
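As a sanity check, `mean_pred` is just the mean of the batch predictions, ignoring `y_true`. A pure-Python sketch of the value it reports (not the Keras graph op, and the batch values below are hypothetical):

```python
# Pure-Python sketch of what mean_pred computes: the mean of y_pred,
# with y_true ignored entirely.
def mean_pred_sketch(y_true, y_pred):
    return sum(y_pred) / len(y_pred)

# Hypothetical batch of sigmoid outputs
print(mean_pred_sketch([1, 0, 1, 1], [0.2, 0.8, 0.5, 0.9]))  # 0.6
```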
Pass the metric name to ModelCheckpoint through the `monitor` argument. If you want the metric computed on the validation data, use the `val_` prefix.
```python
ModelCheckpoint('weights.{epoch:02d}-{val_mean_pred:.2f}.hdf5',
                monitor='val_mean_pred',
                save_best_only=True,
                save_weights_only=True,
                mode='max',
                period=1)
```
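The checkpoint filename is built with ordinary Python string formatting from the epoch number and the epoch-end logs. A minimal sketch, with a hypothetical metric value:

```python
# Sketch: how the checkpoint filename template gets filled in each epoch.
# The keys come from the epoch-end logs; the value here is hypothetical.
filepath = 'weights.{epoch:02d}-{val_mean_pred:.2f}.hdf5'
logs = {'val_mean_pred': 0.73125}

print(filepath.format(epoch=5, **logs))  # weights.05-0.73.hdf5
```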
Don't use `mode='auto'` for custom metrics. Understand why here.
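The reason, roughly: with `mode='auto'`, ModelCheckpoint guesses the comparison direction from the metric name (older Keras versions only pick "max" when the name contains `acc` or starts with `fmeasure`), so an unrecognized custom name silently falls back to "min". A simplified sketch of that decision, not Keras's exact code:

```python
def should_save(current, best, mode, monitor='val_mean_pred'):
    """Simplified sketch of ModelCheckpoint's save-best decision."""
    if mode == 'auto':
        # Old-Keras heuristic: maximize only for recognizable names,
        # otherwise minimize -- the wrong direction for a custom metric
        # like val_mean_pred.
        mode = 'max' if ('acc' in monitor or monitor.startswith('fmeasure')) else 'min'
    return current > best if mode == 'max' else current < best

# An improving custom metric: 'auto' wrongly skips the save, 'max' keeps it
print(should_save(0.9, 0.8, 'auto'))  # False
print(should_save(0.9, 0.8, 'max'))   # True
```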
Why am I answering my own question? Check this.

- The link for "understand why" is broken; got me curious. – Pablo Werlang Sep 23 '19 at 17:26
- I got why: with a custom metric, ModelCheckpoint doesn't know whether a higher or lower value is better, so `mode='auto'` is bad. In my case I defined an F1 metric, so I needed to tell ModelCheckpoint that a higher `val_f1` is better, hence `mode='max'`. – Pablo Werlang Sep 23 '19 at 17:33