I am trying to implement an RNN in TensorFlow for text prediction. I am using BasicLSTMCell for this purpose, with sequence lengths of 100.
If I understand correctly, the hidden state h_t and the cell state c_t of the LSTM are reset each time we enter a new sequence (that is, they are updated 100 times along the sequence, but once we move to the next sequence in the batch, they are reset to 0).
Is there a way to prevent this from happening in TensorFlow? That is, can I keep using the updated c_t and h_t across all the sequences in the batch, and then reset them only when moving to the next batch?
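For reference, one common pattern for carrying LSTM state across consecutive runs is to expose the initial state as placeholders and feed the final state returned by one `session.run` back in as the initial state of the next. This is only a minimal graph-mode sketch (using the `tf.compat.v1` API so it also runs on TF 2.x); the sizes and the random input chunks are placeholders for illustration:

```python
import numpy as np
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # needed on TF 2.x for this graph-mode sketch

# Toy sizes for illustration only
batch_size, seq_len, input_dim, num_units = 2, 5, 3, 8

inputs = tf.compat.v1.placeholder(tf.float32, [batch_size, seq_len, input_dim])
# Placeholders for the carried-over state, so it can be fed in at each run
c_in = tf.compat.v1.placeholder(tf.float32, [batch_size, num_units])
h_in = tf.compat.v1.placeholder(tf.float32, [batch_size, num_units])

cell = tf.compat.v1.nn.rnn_cell.BasicLSTMCell(num_units)
initial_state = tf.compat.v1.nn.rnn_cell.LSTMStateTuple(c_in, h_in)
outputs, final_state = tf.compat.v1.nn.dynamic_rnn(
    cell, inputs, initial_state=initial_state)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    # Start from zeros; feed the returned state back in for each consecutive
    # chunk, and reset c and h to zeros whenever a fresh text begins.
    c = np.zeros((batch_size, num_units), dtype=np.float32)
    h = np.zeros((batch_size, num_units), dtype=np.float32)
    for _ in range(3):  # three consecutive chunks of the same text
        chunk = np.random.rand(batch_size, seq_len, input_dim).astype(np.float32)
        out, (c, h) = sess.run([outputs, final_state],
                               feed_dict={inputs: chunk, c_in: c, h_in: h})
```

Note that this carries state across successive calls (i.e., across batches of consecutive chunks); within a single batch the sequences are processed in parallel, so state cannot flow from one batch element to another in the same run.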