I'm trying to implement an LSTM neural network in TensorFlow for keyword detection. I feed the network sequences of 400 ms each. During training, I don't want the LSTM to still remember sequence 1 while it is learning sequence 6, for instance. So how can I reset the state of the LSTM during training? Does the initial_state argument in outputs, state = rnn.rnn(cell, inputs, initial_state=self._initial_state)
allow me to reset the memory of the LSTM once the entire batch has been fed?
I tried to understand the implementation from this example:
https://github.com/tensorflow/models/blob/master/tutorials/rnn/ptb/ptb_word_lm.py
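To make the question concrete, here is a minimal sketch of the behavior I'm asking about. It uses the current Keras LSTM layer rather than the legacy rnn.rnn call, and the shapes (40 frames of 13 features per 400 ms window) are made up for illustration:

```python
import numpy as np
import tensorflow as tf

# Hypothetical shapes: one 400 ms window as 40 frames of 13 features each.
batch_size, time_steps, features = 4, 40, 13

# With stateful=False (the default), the LSTM starts from a zero initial
# state on every call, so no memory carries over from one batch of
# sequences to the next -- sequence 1 cannot leak into sequence 6.
lstm = tf.keras.layers.LSTM(32, stateful=False)

x = np.random.rand(batch_size, time_steps, features).astype("float32")
out = lstm(x)  # final hidden state per sequence, shape (batch_size, 32)
```

If this is equivalent to passing a zero initial_state to rnn.rnn for each batch, that would answer my question.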