
I have a custom callback that shows me the number of false and true positives at the end of each epoch. I'd like to use ModelCheckpoint to save the model with the maximum true-minus-false-positives value. I've tried the following code, but it doesn't seem to work; it gives:

RuntimeWarning: Can save best model only with tpfp available, skipping.

Does anyone know how this can be done?
Thank you kindly

from sklearn import metrics  # for confusion_matrix
import keras

class tpfp(keras.callbacks.Callback):
    def on_epoch_end(self,epoch,logs={}):
        x_test=self.validation_data[0]
        y_test=self.validation_data[1]
        y_pred=self.model.predict(x_test,verbose=0)
        y_pred[y_pred>.6]=1  #change threshold here
        y_pred[y_pred<1] = 0
        cm=metrics.confusion_matrix(y_test,y_pred)
        fp=cm[0,1]  # false positives
        tp=cm[1,1]  # true positives
        print(f'fp{fp}, tp{tp}')
        return(tp-fp)

mc = keras.callbacks.ModelCheckpoint('model.h5',monitor=tpfp(),mode='max',
                                     save_best_only=True,verbose=1)


model.fit(x_train, y_train, epochs=500, batch_size=100,
          validation_data=(x_test, y_test), callbacks=[tpfp(),mc],
          shuffle=True, verbose=2)

1 Answer


Works for TF < 2.0.0.

You cannot pass a callback instance as the monitor argument; monitor expects the string name of a metric that appears in the logs dictionary.

The elegant and natural solution to your problem is to add one line to your on_epoch_end method that writes the value into logs:

def on_epoch_end(self,epoch,logs={}):
    x_test=self.validation_data[0]
    y_test=self.validation_data[1]
    y_pred=self.model.predict(x_test,verbose=0)
    y_pred[y_pred>.6]=1  #change threshold here
    y_pred[y_pred<1] = 0
    cm=metrics.confusion_matrix(y_test,y_pred)
    fp=cm[0,1]
    tp=cm[1,1]
    print(f'fp{fp}, tp{tp}')
    my_custom_value = tp - fp
    # expose the value to other callbacks via the logs dict
    # (the return value of on_epoch_end is ignored by Keras)
    logs['my_custom_metric'] = my_custom_value

Now in your main:

mc = keras.callbacks.ModelCheckpoint('model.h5',monitor='my_custom_metric',mode='max',
                                     save_best_only=True,verbose=1)

By writing the value into the logs dictionary at the end of each epoch, you make 'my_custom_metric' visible to ModelCheckpoint, which can then monitor it.
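
Note that the order of the callbacks list matters: Keras calls callbacks in the order given, and ModelCheckpoint reads logs['my_custom_metric'] in its own on_epoch_end, so your custom callback must come first. A sketch of the fit call, reusing the names from your question:

model.fit(x_train, y_train, epochs=500, batch_size=100,
          validation_data=(x_test, y_test),
          callbacks=[tpfp(), mc],  # tpfp first, so the metric is logged before mc checks it
          shuffle=True, verbose=2)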

For TF > 2.0.0, you can check the answer I provided here:

How to get other metrics in Tensorflow 2.0 (not only accuracy)?

  • Didn't work in tf 2.4.1. Had to set `self._supports_tf_logs = True` _after_ `super().__init__()` in my callback. The other alternative is to set `_supports_tf_logs = False` on the `ModelCheckpoint` instance. For details see `keras.callbacks.CallbackList.on_epoch_end`. – Adam May 05 '21 at 07:08
  • Yes, the codebase changed significantly since October 2019. Here is an updated answer which should help you: https://stackoverflow.com/questions/60616842/how-to-get-other-metrics-in-tensorflow-2-0-not-only-accuracy/60800425#60800425 – Timbus Calin May 05 '21 at 07:18
  • Thanks for the observation, I updated the answer so that everybody can see. – Timbus Calin May 05 '21 at 07:20
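
Following up on Adam's comment, here is a minimal sketch of the same idea for newer TF 2.x versions. It assumes you pass the validation data to the callback yourself (self.validation_data is no longer populated in tf.keras callbacks) and sets self._supports_tf_logs = True as the comment suggests; the class name TpFpCallback is just illustrative.

import tensorflow as tf
from sklearn import metrics

class TpFpCallback(tf.keras.callbacks.Callback):
    def __init__(self, x_val, y_val):
        super().__init__()
        # Let our mutation of logs reach ModelCheckpoint (see comment above).
        self._supports_tf_logs = True
        self.x_val = x_val
        self.y_val = y_val

    def on_epoch_end(self, epoch, logs=None):
        logs = logs if logs is not None else {}
        # Same .6 threshold as in the question.
        y_pred = (self.model.predict(self.x_val, verbose=0) > .6).astype(int)
        cm = metrics.confusion_matrix(self.y_val, y_pred)
        fp, tp = cm[0, 1], cm[1, 1]
        logs['my_custom_metric'] = tp - fp

mc = tf.keras.callbacks.ModelCheckpoint('model.h5', monitor='my_custom_metric',
                                        mode='max', save_best_only=True, verbose=1)
# model.fit(..., callbacks=[TpFpCallback(x_test, y_test), mc])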