Is there a way in Keras to cross-validate the metric monitored by early stopping, e.g. EarlyStopping(monitor = 'val_acc', patience = 5)? Before allowing training to proceed to the next epoch, could the model be cross-validated to get a more robust estimate of the test error? What I have found is that the early stopping metric, say the accuracy on a single validation set, can suffer from high variance. Early-stopped models often perform noticeably worse on unseen data, and I suspect this is because of the high variance inherent in the single-validation-set approach.
To minimize the variance in the early stopping metric, I would like to k-fold cross-validate it as the model trains from epoch i to epoch i + 1. Concretely: take the model at epoch i, divide the training data into 10 parts, train on 9 parts, estimate the error on the remaining part, repeat so that all 10 parts have had a chance to be the validation set, and then proceed with training to epoch i + 1 on the full training data as usual. The average of the 10 error estimates will hopefully be a more robust metric that can be used for early stopping.
I have tried to write a custom metric function that performs the k-fold cross-validation, but I can't get it to work. Is there a way to cross-validate the early stopping metric being monitored, perhaps through a custom function inside the Keras model or a loop outside the Keras model?
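For reference, this is roughly the "loop outside the Keras model" version I have in mind. It's only a minimal sketch, not verified working code: build_model is a placeholder for whatever function builds and compiles the network (compiled with metrics=['accuracy'] so that evaluate() returns [loss, accuracy]), and I'm using scikit-learn's KFold for the splits:

```python
import numpy as np
from sklearn.model_selection import KFold

def fit_with_cv_early_stopping(build_model, X, y,
                               max_epochs=100, patience=5, n_splits=10):
    model = build_model()
    best_score, wait = -np.inf, 0
    for epoch in range(max_epochs):
        # Train for one epoch on the full training data.
        model.fit(X, y, epochs=1, verbose=0)

        # Estimate generalization at epoch i with k-fold CV:
        # restart each fold from the epoch-i weights, train one epoch
        # on k-1 folds, and score on the held-out fold.
        weights = model.get_weights()
        scores = []
        for train_idx, val_idx in KFold(n_splits=n_splits, shuffle=True).split(X):
            fold_model = build_model()
            fold_model.set_weights(weights)
            fold_model.fit(X[train_idx], y[train_idx], epochs=1, verbose=0)
            _, acc = fold_model.evaluate(X[val_idx], y[val_idx], verbose=0)
            scores.append(acc)
        cv_score = np.mean(scores)

        # Standard patience logic, applied to the averaged metric
        # instead of a single validation set's accuracy.
        if cv_score > best_score:
            best_score, wait = cv_score, 0
        else:
            wait += 1
            if wait >= patience:
                break
    return model
```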
Thanks!!