
For now I'm using early stopping in Keras like this:

from keras.callbacks import EarlyStopping
from keras.preprocessing.image import ImageDataGenerator
from sklearn.model_selection import train_test_split

X, y = load_data('train_data')
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=12)

datagen = ImageDataGenerator(
    horizontal_flip=True,
    vertical_flip=True)

early_stopping_callback = EarlyStopping(monitor='val_loss', patience=epochs_to_wait_for_improve)
history = model.fit_generator(datagen.flow(X_train, y_train, batch_size=batch_size),
            steps_per_epoch=len(X_train) // batch_size,  # integer division: steps must be a whole number
            validation_data=(X_test, y_test),
            epochs=n_epochs, callbacks=[early_stopping_callback])

But at the end of model.fit_generator, the model that remains is the one from the last epoch, i.e. epochs_to_wait_for_improve epochs after the best one. I want to save the model with the minimum val_loss instead. Does that make sense, and is it possible?

mrgloom
  • That is definitely possible. Just create your own checkpointer. Check the answers here: http://stackoverflow.com/questions/37293642/how-to-tell-keras-stop-training-based-on-loss-value – Wilmar van Ommeren May 18 '17 at 15:32
  • Did you take a look at this one http://stackoverflow.com/questions/37293642/how-to-tell-keras-stop-training-based-on-loss-value ? – orabis May 18 '17 at 15:32

1 Answer


Yes, it's possible with one more callback: ModelCheckpoint with save_best_only=True, which writes the model to disk only when the monitored metric improves. Here is the code:

from keras.callbacks import EarlyStopping, ModelCheckpoint

early_stopping_callback = EarlyStopping(monitor='val_loss', patience=epochs_to_wait_for_improve)
# save_best_only=True with mode='min' overwrites the .h5 file only when val_loss reaches a new minimum
checkpoint_callback = ModelCheckpoint(model_name + '.h5', monitor='val_loss', verbose=1,
            save_best_only=True, mode='min')
history = model.fit_generator(datagen.flow(X_train, y_train, batch_size=batch_size),
            steps_per_epoch=len(X_train) // batch_size, validation_data=(X_test, y_test),
            epochs=n_epochs, callbacks=[early_stopping_callback, checkpoint_callback])
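To see why this works, the save_best_only bookkeeping boils down to tracking the best value seen so far and only writing a checkpoint when it improves. Here is a minimal pure-Python sketch of that logic (a hypothetical stand-in, not Keras's actual implementation; BestCheckpointer and its saved_epochs list are invented names for illustration):

```python
import math

class BestCheckpointer:
    """Sketch of ModelCheckpoint(save_best_only=True, mode='min'):
    record a 'save' only when the monitored value improves."""
    def __init__(self):
        self.best = math.inf     # best val_loss seen so far
        self.saved_epochs = []   # epochs where the file would be (re)written

    def on_epoch_end(self, epoch, val_loss):
        if val_loss < self.best:  # mode='min': lower is better
            self.best = val_loss
            self.saved_epochs.append(epoch)  # the real callback saves the .h5 here

ckpt = BestCheckpointer()
for epoch, loss in enumerate([0.9, 0.7, 0.8, 0.6, 0.65]):
    ckpt.on_epoch_end(epoch, loss)

print(ckpt.saved_epochs)  # → [0, 1, 3]
print(ckpt.best)          # → 0.6
```

So even if EarlyStopping ends training several epochs past the best one, the file on disk still holds the minimum-val_loss model, which you can reload with keras.models.load_model. Newer Keras versions also offer EarlyStopping(restore_best_weights=True), which restores the best weights in memory without a separate checkpoint file.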
mrgloom