Stateful LSTM: When to reset states?

I'd like to expand on this question of when to reset states.
Suppose I train a stateful model as follows:

for i in range(epochs):
    model.fit(X_train, y_train, epochs=1, batch_size=1, shuffle=False)
    model.reset_states()
My training and test sets come from a single time series, with the test set immediately following the training set.
Next, I want to evaluate on the test set and get an array of predictions:
score = model.evaluate(X_test, y_test, batch_size=1, verbose=True)
prediction = model.predict(X_test, batch_size=1)
I suspect that resetting the model's state at the end of the training loop will make the evaluate and predict steps wrong, at least at the start of the test set. Is that so? Should I skip the reset on the last epoch, since the data continues sequentially into the test set? Something like the sketch below.
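Here is a minimal sketch of what I have in mind (same model, X_train, and y_train as above; the epoch check is my own idea, not something I've seen documented):

for i in range(epochs):
    model.fit(X_train, y_train, epochs=1, batch_size=1, shuffle=False)
    if i < epochs - 1:
        # reset between epochs, but keep the state after the final
        # pass so it carries over into evaluation on the test set
        model.reset_states()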
Also, after I evaluate on the test set, do I need to restore the model's state to what it was at the end of training before I call predict? Should I copy the model, or save and reload it? One idea I had is sketched below.
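For instance, would something like this work? This assumes tf.keras, where stateful RNN layers expose a .states list and reset_states() accepts explicit state arrays; the layer index is specific to my model and may need adjusting.

import tensorflow as tf

# snapshot the LSTM's (h, c) state arrays at the end of training
lstm_layer = model.layers[0]  # adjust index to wherever the stateful LSTM sits
end_of_train_states = [tf.keras.backend.get_value(s) for s in lstm_layer.states]

# evaluate() advances the state through X_test...
score = model.evaluate(X_test, y_test, batch_size=1, verbose=True)

# ...so roll it back to the end-of-training snapshot before predicting
lstm_layer.reset_states(states=end_of_train_states)
prediction = model.predict(X_test, batch_size=1)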