
The solution suggested in a similar question did not solve my problem.

I'm trying to use the ModelCheckpoint and EarlyStopping callbacks to save the best weights when early stopping. After the first epoch I get a runtime warning; the code runs the remaining epochs without errors, but no file containing the weights ever appears. The warning after the first epoch is the following:

    RuntimeWarning: Can save best model only with val_acc available, skipping.
      'skipping.' % (self.monitor), RuntimeWarning)
    RuntimeWarning: Early stopping conditioned on metric `val_acc` which is not
      available. Available metrics are: val_loss,val_accuracy,loss,accuracy
      (self.monitor, ','.join(list(logs.keys()))), RuntimeWarning

I pass validation data to the fit() function, so I'm not sure why this happens.

filepath = "weights_best.hdf5"

model.compile(loss="mean_squared_error",
                  metrics=['accuracy'],
                  optimizer=optimizer)
batchSize = 64 
numEpochs = 75
validation_data = (data.x_valid, data.y_valid)

callbackCheckpoint = keras.callbacks.callbacks.ModelCheckpoint(filepath,
                                                               monitor='val_acc',
                                                               save_best_only=True,
                                                               save_weights_only=True,
                                                               mode='max')

callbackEarlyStop = keras.callbacks.callbacks.EarlyStopping(monitor='val_acc',
                                                                min_delta=0,
                                                                patience=7,
                                                                verbose=0,
                                                                mode='auto')
callbacks = [callbackCheckpoint, callbackEarlyStop]
model.fit(data.x_train, data.y_train, batchSize, numEpochs, callbacks=callbacks,
              validation_data=validation_data)

Any help would be greatly appreciated!

Sachith Muhandiram

2 Answers


Change monitor='val_acc' to monitor='val_accuracy', or change metrics=['accuracy'] to metrics=['acc']. The monitored name must exactly match one of the metric keys Keras logs each epoch (here: val_loss, val_accuracy, loss, accuracy).
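You can see why the mismatch silently skips checkpointing without running any training: the callbacks simply look the monitored name up in the per-epoch logs dictionary, whose keys are the metric names listed in the warning. A minimal plain-Python sketch of that lookup (illustrative only, not the actual Keras source; the metric values are made up):

```python
# Per-epoch logs as reported when the model is compiled with
# metrics=['accuracy'] -- the keys the warning message lists.
logs = {"val_loss": 0.42, "val_accuracy": 0.81, "loss": 0.40, "accuracy": 0.83}

def monitor_available(monitor, logs):
    """Mimic the callbacks' check: the monitored metric must be a logs key."""
    return monitor in logs

print(monitor_available("val_acc", logs))       # False -> warning, epoch skipped
print(monitor_available("val_accuracy", logs))  # True  -> checkpointing works
```

Because every epoch fails this lookup, save_best_only never fires, which is why no weights file appears even though training completes.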

Natthaphon Hongcharoen

Yes, monitor 'val_accuracy'. Just a comment: monitoring and saving the weights based on the highest val_accuracy may not give you the "best" model. Monitoring 'val_loss' is usually a better choice. val_loss and val_accuracy are calculated in entirely different ways, and it is not uncommon to see val_loss increase while val_accuracy also increases, or vice versa. In the end you want the model with the lowest validation loss. If you do that, make sure to change to mode='min' (or mode='auto') in the checkpoint callback.
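The reason mode matters when switching the monitor is the direction of the comparison. A plain-Python sketch of the save_best_only decision under each mode (illustrative only, not the Keras source):

```python
def should_save(current, best, mode):
    """Decide whether this epoch's metric beats the best seen so far.

    mode='min' suits losses (lower is better, e.g. val_loss);
    mode='max' suits scores (higher is better, e.g. val_accuracy).
    """
    return current < best if mode == "min" else current > best

# Monitoring val_loss with mode='min':
print(should_save(0.30, 0.35, "min"))  # True: loss improved, checkpoint saved
print(should_save(0.40, 0.35, "min"))  # False: loss worsened, epoch skipped
```

With mode='auto', Keras infers the direction from the metric name, which works for common names like 'val_loss' and 'val_accuracy'.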

Gerry P