
I want to design a single-layer RNN in TensorFlow such that the last output (y(t-1)) participates in updating the hidden state:

h(t) = tanh(W_{ih} * x(t) + W_{hh} * h(t-1) + W_{oh} * y(t-1))
y(t) = W_{ho} * h(t)
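
Concretely, this is the update I am after at every step (a NumPy sketch with assumed sizes, just to pin down the shapes):

import numpy as np

n_in, n_hidden, n_out = 8, 32, 8  # assumed sizes

W_ih = np.random.randn(n_hidden, n_in)
W_hh = np.random.randn(n_hidden, n_hidden)
W_oh = np.random.randn(n_hidden, n_out)
W_ho = np.random.randn(n_out, n_hidden)

h = np.zeros(n_hidden)
y = np.zeros(n_out)  # y(t-1), zero at the first step
for x_t in np.random.randn(20, n_in):  # dummy input sequence
    h = np.tanh(W_ih @ x_t + W_hh @ h + W_oh @ y)  # y still holds y(t-1) here
    y = W_ho @ h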

How can I feed the last output y(t-1) back in as an input for updating the hidden state?

Poorya Pzm
  • Currently, I am looking into this tutorial which seems promising: https://github.com/ematvey/tensorflow-seq2seq-tutorials/blob/master/2-seq2seq-advanced.ipynb – George Pligoropoulos Jul 05 '17 at 10:38
  • Related post: https://stackoverflow.com/questions/39681026/tensorflow-how-to-pass-output-from-previous-time-step-as-input-to-next-timestep/49274234#49274234 – kafman Mar 21 '18 at 14:38

3 Answers


Is y(t-1) the last input or the last output? Either way, it is not a straight fit for the TensorFlow RNN cell abstraction. If your RNN is simple, you can write the loop yourself and keep full control (see the sketch after the comments below). Another approach I would use is to pre-process the RNN input, e.g., do something like:

processed_input[t] = tf.concat([input[t], input[t - 1]], axis=-1)

Then call the RNN cell with processed_input and split it apart again inside the cell.
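
A minimal sketch of that pre-processing idea (shapes and names are assumptions, not from the original answer): build x(t-1) by shifting the sequence one step along time, concatenate along the feature axis, and hand the result to a standard cell:

import tensorflow as tf

# [batch, time, features] input sequence (placeholder shape is assumed)
inputs = tf.placeholder(tf.float32, [None, 20, 8])

# x(t-1): shift right by one step, zero-padding the first step
shifted = tf.pad(inputs, [[0, 0], [1, 0], [0, 0]])[:, :-1, :]

# processed_input[t] = [x(t); x(t-1)] along the feature axis
processed_input = tf.concat([inputs, shifted], axis=-1)

cell = tf.nn.rnn_cell.BasicRNNCell(32)
outputs, state = tf.nn.dynamic_rnn(cell, processed_input, dtype=tf.float32)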

  • It uses the output it generated itself at the previous step. How can I write a loop for a simple RNN in such a way that the behind-the-scenes optimizations still apply? – Poorya Pzm Feb 03 '16 at 21:32
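
To illustrate the "write the loop yourself" suggestion from this answer, here is a minimal graph-mode sketch (all sizes assumed, fixed sequence length; a tf.while_loop would be needed for variable lengths) that implements the recurrence from the question directly:

import tensorflow as tf

INPUT_SIZE, HIDDEN_SIZE, OUTPUT_SIZE, TIME_STEPS = 8, 32, 8, 20

x = tf.placeholder(tf.float32, [None, TIME_STEPS, INPUT_SIZE])
batch_size = tf.shape(x)[0]

W_ih = tf.get_variable("W_ih", [INPUT_SIZE, HIDDEN_SIZE])
W_hh = tf.get_variable("W_hh", [HIDDEN_SIZE, HIDDEN_SIZE])
W_oh = tf.get_variable("W_oh", [OUTPUT_SIZE, HIDDEN_SIZE])
W_ho = tf.get_variable("W_ho", [HIDDEN_SIZE, OUTPUT_SIZE])

h = tf.zeros([batch_size, HIDDEN_SIZE])
y = tf.zeros([batch_size, OUTPUT_SIZE])  # y(t-1), zero at t=0
outputs = []
for t in range(TIME_STEPS):
    # h(t) = tanh(W_ih * x(t) + W_hh * h(t-1) + W_oh * y(t-1))
    h = tf.tanh(tf.matmul(x[:, t, :], W_ih) + tf.matmul(h, W_hh) + tf.matmul(y, W_oh))
    # y(t) = W_ho * h(t)
    y = tf.matmul(h, W_ho)
    outputs.append(y)
outputs = tf.stack(outputs, axis=1)  # [batch, time, OUTPUT_SIZE]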

One possibility is to use tf.nn.raw_rnn, which I found in this article. Check my answer to this related post.

kafman

I would call what you described an "autoregressive RNN". Here is a code snippet that shows how you can create one using tf.nn.raw_rnn:

import tensorflow as tf

LSTM_SIZE = 128
BATCH_SIZE = 64
HORIZON = 10

lstm_cell = tf.nn.rnn_cell.LSTMCell(LSTM_SIZE, use_peepholes=True)

# Initial LSTM state; zeros here as an example, but it could just as well
# come from an encoder network.
initial_state_tensor = tf.nn.rnn_cell.LSTMStateTuple(
    c=tf.zeros([BATCH_SIZE, LSTM_SIZE]),
    h=tf.zeros([BATCH_SIZE, LSTM_SIZE]))


class RnnLoop:
    """Loop function for tf.nn.raw_rnn that feeds each output back as the next input."""

    def __init__(self, initial_state, cell):
        self.initial_state = initial_state
        self.cell = cell

    def __call__(self, time, cell_output, cell_state, loop_state):
        emit_output = cell_output  # == None for time == 0
        if cell_output is None:  # time == 0: provide the initial input and state
            initial_input = tf.fill([BATCH_SIZE, LSTM_SIZE], 0.0)
            next_input = initial_input
            next_cell_state = self.initial_state
        else:
            # Autoregressive step: the previous output becomes the next input.
            next_input = cell_output
            next_cell_state = cell_state

        elements_finished = (time >= HORIZON)
        next_loop_state = None
        return elements_finished, next_input, next_cell_state, emit_output, next_loop_state


rnn_loop = RnnLoop(initial_state=initial_state_tensor, cell=lstm_cell)
rnn_outputs_tensor_array, _, _ = tf.nn.raw_rnn(lstm_cell, rnn_loop)
rnn_outputs_tensor = rnn_outputs_tensor_array.stack()  # [time, batch, LSTM_SIZE]

Here we initialize the internal state of the LSTM with the tensor initial_state_tensor (zeros in this example; it could come from an encoder instead) and feed a zero array as the input at t=0. After that, the output of the current timestep is fed back as the input for the next timestep.
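
In this snippet the value fed back is the raw LSTM output h(t). To feed back a projected output y(t) = W_{ho} * h(t) as in the question, one option (a sketch, not from the original answer; OUTPUT_SIZE is an assumed hyperparameter) is to wrap the cell in tf.contrib.rnn.OutputProjectionWrapper, so the projection is applied before the output flows back:

OUTPUT_SIZE = 8  # assumed output dimensionality

# The wrapper applies a learned projection to every cell output, so
# cell_output (and hence next_input) is y(t) rather than h(t).
projected_cell = tf.contrib.rnn.OutputProjectionWrapper(lstm_cell, OUTPUT_SIZE)

# The zero input at t=0 must then match the projected size, i.e. in RnnLoop:
#     initial_input = tf.fill([BATCH_SIZE, OUTPUT_SIZE], 0.0)
rnn_loop = RnnLoop(initial_state=initial_state_tensor, cell=projected_cell)
projected_outputs_ta, _, _ = tf.nn.raw_rnn(projected_cell, rnn_loop)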

Aleksei Petrenko