I'm new to PyTorch and machine learning. I'm following this tutorial, https://www.learnopencv.com/image-classification-using-transfer-learning-in-pytorch/, with my own custom dataset. I ran into the same problem as in the tutorial, but I don't know how to implement early stopping in PyTorch. If you know a better approach than writing the early stopping process myself, please tell me.
Is there still no early stopping natively in Pytorch? – MJimitater Jun 26 '21 at 16:16
3 Answers
This is what I did in each epoch
val_loss += loss.item()
val_loss = val_loss / len(valloader)  # average over the validation batches
if val_loss < min_val_loss:
    # Save the best model seen so far
    min_val_loss = val_loss
    best_model = copy.deepcopy(loaded_model.state_dict())
    print('Min validation loss %0.2f' % min_val_loss)
    epochs_no_improve = 0
else:
    epochs_no_improve += 1
    # Check early stopping condition
    if epochs_no_improve == n_epochs_stop:
        print('Early stopping!')
        loaded_model.load_state_dict(best_model)
        break
I don't know how correct it is (I took most of this code from a post on another website, but I forgot where, so I can't put the reference link; I have only modified it a bit). I hope you find it useful; in case I'm wrong, kindly point out the mistake. Thank you.
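For context, here is a self-contained sketch of the same epoch-level logic inside a full training loop. The model, optimizer, and random data below are illustrative placeholders, not part of the original answer:

```python
import copy
import torch
import torch.nn as nn

# Toy setup (placeholders so the sketch is runnable)
model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.MSELoss()
X_train, y_train = torch.randn(64, 10), torch.randn(64, 1)
X_val, y_val = torch.randn(32, 10), torch.randn(32, 1)

n_epochs_stop = 5
epochs_no_improve = 0
min_val_loss = float('inf')
best_model = copy.deepcopy(model.state_dict())

for epoch in range(100):
    # One training step per "epoch" for brevity
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(X_train), y_train)
    loss.backward()
    optimizer.step()

    # Validation loss, without tracking gradients
    model.eval()
    with torch.no_grad():
        val_loss = criterion(model(X_val), y_val).item()

    if val_loss < min_val_loss:
        # Improvement: remember the best weights and reset the counter
        min_val_loss = val_loss
        best_model = copy.deepcopy(model.state_dict())
        epochs_no_improve = 0
    else:
        epochs_no_improve += 1
        if epochs_no_improve == n_epochs_stop:
            print('Early stopping!')
            break

# Restore the best weights before using the model
model.load_state_dict(best_model)
```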

Try the code below. It assumes a batch loop nested inside the epoch loop: the `else` clause of the inner `for` runs only when that loop finishes without a `break`, so `continue` moves on to the next epoch, while a `break` in the inner loop falls through to the outer `break`:

early_stop = False
for epoch in range(num_epochs):
    for batch in trainloader:
        # ... training and validation steps ...
        # Check early stopping condition
        if epochs_no_improve == n_epochs_stop:
            print('Early stopping!')
            early_stop = True
            break
    else:
        continue
    break
if early_stop:
    print("Stopped")
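Python's `for`/`else` is the key to this pattern: the `else` body runs only when the loop ends without a `break`. A reduced, runnable demonstration (loop bounds and the stopping condition here are arbitrary stand-ins):

```python
early_stop = False
epochs_seen = []

for epoch in range(10):                 # outer (epoch) loop
    epochs_seen.append(epoch)
    for step in range(3):               # inner (batch) loop
        if epoch == 4 and step == 1:    # stand-in for the early-stopping condition
            early_stop = True
            break                       # leave the batch loop
    else:
        continue                        # inner loop ended normally: next epoch
    break                               # inner loop was broken: leave the epoch loop

print(early_stop, epochs_seen)  # → True [0, 1, 2, 3, 4]
```

Only epochs 0 through 4 run; the inner `break` skips the `else`, so the outer `break` executes and training stops.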

The idea of early stopping is to avoid overfitting by stopping the training process if there is no sign of improvement upon a monitored quantity, e.g. validation loss stops decreasing after a few iterations. A minimal implementation of early stopping needs 3 components:
- best_score: a variable that stores the best value of the validation loss seen so far.
- counter: a variable that tracks the number of epochs run without improvement.
- patience: the number of epochs training is allowed to continue without improvement on the validation loss. If counter exceeds this, we stop the training process.
A pseudocode looks like this:

# Define best_score, counter, and patience for early stopping:
best_score = None
counter = 0
patience = 10
path = './checkpoints'  # user-defined path to save the model

# Training loop:
for epoch in range(num_epochs):
    # Compute training loss
    loss = model(features, labels, train_mask)
    # Compute validation loss
    val_loss = evaluate(model, features, labels, val_mask)
    if best_score is None or val_loss < best_score:
        # val_loss improves: update best_score, reset the counter,
        # and save the current model
        best_score = val_loss
        counter = 0
        torch.save({'state_dict': model.state_dict()}, path)
    else:
        # val_loss does not improve: increase the counter and
        # stop training if it exceeds the amount of patience
        counter += 1
        if counter >= patience:
            break

# Load best model
print('loading model before testing.')
model_checkpoint = torch.load(path)
model.load_state_dict(model_checkpoint['state_dict'])
acc = evaluate_test(model, features, labels, test_mask)
I've implemented a generic early stopping class for PyTorch to use with some of my projects. It allows you to monitor any validation quantity of interest (loss, accuracy, etc.). If you prefer a fancier early stopping, feel free to check it out in the repo early-stopping. There's an example notebook for reference, too.
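As a rough idea of what such a reusable class can look like, here is a minimal sketch. This is not the code from that repo; the `mode` and `min_delta` parameters are illustrative choices (`mode='min'` for quantities like loss, `'max'` for accuracy):

```python
class EarlyStopping:
    """Stop training when a monitored quantity stops improving."""

    def __init__(self, patience=10, min_delta=0.0, mode='min'):
        self.patience = patience    # epochs to wait without improvement
        self.min_delta = min_delta  # minimum change that counts as improvement
        self.mode = mode            # 'min' for loss, 'max' for accuracy
        self.best = None
        self.counter = 0
        self.should_stop = False

    def step(self, value):
        """Record the latest value; return True when training should stop."""
        if self.best is None:
            self.best = value
            return self.should_stop
        if self.mode == 'min':
            improved = value < self.best - self.min_delta
        else:
            improved = value > self.best + self.min_delta
        if improved:
            self.best = value
            self.counter = 0
        else:
            self.counter += 1
            if self.counter >= self.patience:
                self.should_stop = True
        return self.should_stop

# Usage: feed it the validation loss once per epoch
stopper = EarlyStopping(patience=3)
for val_loss in [1.0, 0.9, 0.95, 0.93, 0.91, 0.97]:
    if stopper.step(val_loss):
        break  # stops after three epochs without improvement over 0.9
```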
