I have a feed-forward neural network that I want to train with minibatches. The training code is as follows:
```python
for epoch in range(epochs):
    model.train()  # enable training-mode behavior for all submodules
    for x_batch, y_batch in training_data:
        optimizer.zero_grad()            # clear gradients from the previous step
        output = model(x_batch)
        loss = loss_fn(output, y_batch)
        loss.backward()                  # backpropagate
        optimizer.step()                 # update parameters

    model.eval()  # switch to inference-mode behavior
    with torch.no_grad():                # disable gradient tracking during evaluation
        output_test = model(x_test)
        loss_t = loss_fn(output_test, y_test)
```
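For completeness, the names used above (`model`, `loss_fn`, `optimizer`, `training_data`, `x_test`, `y_test`) come from a setup roughly like this; the architecture, dimensions, and data here are illustrative placeholders, not my exact code:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder feed-forward network; note it has no dropout layers
model = nn.Sequential(
    nn.Linear(20, 64),
    nn.ReLU(),
    nn.Linear(64, 1),
)

loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

# Dummy data standing in for my real dataset
x_train, y_train = torch.randn(1000, 20), torch.randn(1000, 1)
x_test, y_test = torch.randn(200, 20), torch.randn(200, 1)
training_data = DataLoader(TensorDataset(x_train, y_train), batch_size=32, shuffle=True)
epochs = 10
```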
I am wondering: is it still necessary to call model.train() and model.eval() when the model does not have any dropout layers? I already wrap the evaluation in torch.no_grad(), so my question is only about the mode switch itself.
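As far as I understand, train() and eval() only toggle the boolean `training` flag on the module and all of its submodules, which layers such as Dropout and BatchNorm check at forward time. A quick toy check (the model here is just illustrative):

```python
import torch.nn as nn

net = nn.Sequential(nn.Linear(4, 4), nn.ReLU())  # toy model with no dropout

net.train()
print(net.training)     # True: submodules use training-mode behavior
print(net[0].training)  # the flag is set recursively on every submodule

net.eval()
print(net.training)     # False: layers like Dropout/BatchNorm would switch to inference behavior
```

Is there any other effect of these calls that I am missing for a plain model like this?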