If you were to treat each 3-letter array as an input step, i.e.:
step 1: [abc]
step 2: [bcd]
step 3: [cde]
then the hidden state will propagate through each timestep automatically inside dynamic_rnn. For a BasicLSTMCell the output at each step is the hidden state itself, so you have nothing to worry about:
import tensorflow as tf
import numpy as np

sess = tf.InteractiveSession()

def lstm_cell(hidden_size):
    return tf.contrib.rnn.BasicLSTMCell(num_units=hidden_size)

in_seqlen = 3  # timesteps per example
input_dim = 3  # size of each timestep's input vector

# shape: (batch_size, timesteps, input_dim)
x = tf.placeholder(tf.float32, [None, in_seqlen, input_dim])

# dynamic_rnn unrolls the cell over the timesteps and carries the
# hidden state from one step to the next internally
out, state = tf.nn.dynamic_rnn(lstm_cell(input_dim), x, dtype=tf.float32)
...
sess.run(tf.global_variables_initializer())

output, states = sess.run([out, state],
                          feed_dict={x: [[[1, 2, 3], [2, 3, 4], [3, 4, 5]]]})
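As a quick sanity check (assuming the snippet above has just been run), the output at the last timestep is exactly the h component of the returned state tuple:

# for BasicLSTMCell, the last timestep's output equals the final hidden state h
print(np.allclose(output[:, -1, :], states.h))  # True
print(output.shape)  # (1, 3, 3): (batch, timesteps, hidden_size)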
If instead you mean treating each one as a separate sequence and passing the state x along yourself, i.e.:
step 1: a,x0
step 2: b,x0
step 3: c,x0
output: x1
step 1: b,x1
step 2: c,x1
step 3: d,x1
output: x2
etc...
Then you need to feed the last state back in as input each time you run the session:
...
in_seqlen = 3
input_dim = 1
hidden_dim = input_dim

x = tf.placeholder(tf.float32, [None, in_seqlen, input_dim])

# placeholder for the LSTM state: index 0 is the cell state c,
# index 1 is the hidden state h, each of shape (batch_size, hidden_dim)
s = tf.placeholder(tf.float32, [2, None, hidden_dim])
state_tuple = tf.nn.rnn_cell.LSTMStateTuple(s[0], s[1])

out, state = tf.nn.dynamic_rnn(lstm_cell(hidden_dim), x,
                               initial_state=state_tuple, dtype=tf.float32)
...
sess.run(tf.global_variables_initializer())

batch_size = 1
init_state = np.zeros((2, batch_size, hidden_dim))  # x0: the all-zero state

output, states = sess.run([out, state],
                          feed_dict={x: [[[1], [2], [3]]], s: init_state})

# feed the state of the previous run into the next run
output, states = sess.run([out, state],
                          feed_dict={x: [[[1], [2], [3]]], s: states})
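Putting it together, the sliding-window loop might look like this (a minimal sketch; encoding a..e as the numbers 1..5 is just for illustration):

seq = [1, 2, 3, 4, 5]  # a, b, c, d, e encoded as numbers for illustration

states = np.zeros((2, batch_size, hidden_dim))  # start from x0, the zero state
for i in range(len(seq) - in_seqlen + 1):
    window = np.reshape(seq[i:i + in_seqlen], (batch_size, in_seqlen, input_dim))
    # each run consumes one window and returns the state (x1, x2, ...),
    # which is fed back in on the next iteration
    output, states = sess.run([out, state], feed_dict={x: window, s: states})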
You'll also need to add a target placeholder, a loss, an optimizer, etc.
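For example, something along these lines (y, loss and train_op are made-up names for illustration, not part of the graph above):

# hypothetical training ops: a regression target per timestep
y = tf.placeholder(tf.float32, [None, in_seqlen, hidden_dim])
loss = tf.reduce_mean(tf.square(out - y))  # mean squared error
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

Note that with optimizers that keep slot variables (e.g. Adam) you'd have to create these ops before running global_variables_initializer.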
Useful:
TensorFlow: Remember LSTM state for next batch (stateful LSTM)
http://colah.github.io/posts/2015-08-Understanding-LSTMs/