In TensorFlow, I want to use some of the variables of my network from the previous training step in the next training step. More specifically, I want to compute a secondary cost function during training that uses some network tensors from the previous training step.
This question could probably be answered with fragments of RNN code, but I haven't figured out how yet. I have looked into How can I feed last output y(t-1) as input for generating y(t) in tensorflow RNN? and Tensorflow: How to pass output from previous time-step as input to next timestep, as well as TensorFlow: Remember LSTM state for next batch (stateful LSTM).
Assume h is the last layer of a neural network with several previous layers, e.g.:
h = tf.nn.relu(tf.matmul(h_previous, W_previous))
How could I preserve the tensor h after processing a sample during training (e.g. save it to h_old), so that I can use it in the next training step for a computation like:
d = tf.sub(h, h_old)
In this example, h is computed on the current training sample, and h_old is the tensor that was computed on the previous training sample. Any ideas on how to achieve this would be great!
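For concreteness, here is a minimal sketch of one direction I have been considering: keeping h_old in a non-trainable tf.Variable that acts as a buffer and is overwritten with the current h after each step. All shapes and names here are placeholders, and I am not sure this is the right way to do it:

import tensorflow as tf

batch_size, dim_in, dim_out = 16, 64, 32  # placeholder sizes

# A stand-in for the last layer of the network.
h_previous = tf.placeholder(tf.float32, [batch_size, dim_in])
W_previous = tf.Variable(tf.truncated_normal([dim_in, dim_out], stddev=0.1))
h = tf.nn.relu(tf.matmul(h_previous, W_previous))

# Buffer holding h from the previous training step; trainable=False
# keeps the optimizer from updating it.
h_old = tf.Variable(tf.zeros([batch_size, dim_out]), trainable=False)

# Secondary quantity computed from the current and previous activations.
d = tf.sub(h, h_old)  # tf.subtract in TensorFlow >= 1.0

# Overwrite h_old with the current h only after d has been evaluated,
# so that d always sees the activations from the previous step.
with tf.control_dependencies([d]):
    update_h_old = tf.assign(h_old, h)

After initializing the variables, running sess.run([d, update_h_old], feed_dict={h_previous: batch}) once per training step would give a d based on the previous step's activations, but I don't know whether this is the proper approach or how it interacts with the optimizer's update.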