I am training my network with an early stopping strategy. I start with a higher learning rate, and based on validation loss, I need to restart training from an earlier snapshot.
I am able to save/load a snapshot with the model and optimizer state_dicts; no problem with that.
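For reference, this is roughly how I save and restore the snapshot (a minimal sketch; the model, file path, and dict keys are just placeholders for my actual setup):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)  # stand-in for my actual network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-6)

# Save a snapshot after an epoch
torch.save({
    'epoch': 5,
    'model_state_dict': model.state_dict(),
    'optimizer_state_dict': optimizer.state_dict(),
}, 'snapshot.pth')

# ...later, restore it
checkpoint = torch.load('snapshot.pth')
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
```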
My question is: once I restart training, how do I set the learning rate of Adam again? Should I create a fresh Adam optimizer instead of loading its state_dict, or should I load the state_dict and then use

```python
optimizer.param_groups[0]['lr'] = lr
```

to adjust the learning rate?
For example, I train my network with lr = 1e-6 for 5 epochs and save the model and optimizer state_dicts. I am now restarting from epoch 6, but I need lr = 1e-7 instead. What is the best approach for this?
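Concretely, this is what I mean by the second option, building on the snapshot sketch above (new_lr here is the 1e-7 from my example):

```python
# Option 2: load the optimizer state_dict, then override lr in every param group.
# (Loading the state_dict keeps Adam's running moment estimates;
#  constructing a fresh Adam would reset them.)
checkpoint = torch.load('snapshot.pth')
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])

new_lr = 1e-7
for param_group in optimizer.param_groups:
    param_group['lr'] = new_lr
```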
Thanks!