
I'm trying to learn RNN forecasting, starting by feeding the outputs back as inputs. The example below sets up the inputs and weights, computes the outputs, splits the data into train/test sets, fits the model on the training set, and then predicts the test set. The test predictions are wrong because the RNN state was not saved and restored. How do I save the RNN state and set it before predicting? Also, how do I add additional hidden states?

import numpy as np
import pandas as pd
import recurrentshop

from keras.layers import Dense, Concatenate, Input
from keras.models import Model

def main():
    input_data = pd.DataFrame(data={
        'A': [1, 1, -19, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 4],
        'B': [1, 1, -18, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 4]
    })

    depth = input_data.shape[1]

    x_t = Input(shape=(depth,))    # The input to the RNN at time t
    y_tm1 = Input(shape=(depth,))  # Previous output (the readout)
    h_tm1 = Input(shape=(depth,))  # Previous hidden state

    # Compute the new hidden state from the current input and the previous output
    x_t_and_y_tm1 = Concatenate()([x_t, y_tm1])
    h_t = Dense(depth, kernel_initializer='ones')(x_t_and_y_tm1)

    # Build the RNN
    rnn = recurrentshop.RecurrentModel(input=x_t, initial_states=[h_tm1], output=h_t, final_states=[h_t],
                                       readout_input=y_tm1, return_sequences=True)

    # Build a Keras Model using our RNN layer
    n_time_steps = input_data.shape[0]
    x = Input(shape=(None, depth))
    y = rnn(x)
    model = Model(x, y)

    # Run the RNN over the full sequence to generate targets
    expanded_input_data = np.expand_dims(input_data, axis=0)
    out = model.predict(expanded_input_data)

    print(f"expanded_input_data: {expanded_input_data}")
    print(f"out: {out}")

    # Now train on the input/output pairs
    n_training = int(0.8 * n_time_steps)

    training_in, testing_in = expanded_input_data[:, :n_training, :], expanded_input_data[:, n_training:, :]
    training_out, testing_out = out[:, :n_training, :], out[:, n_training:, :]

    # Reinitialize the weights so training starts from scratch
    for layer in model.layers:
        if hasattr(layer, 'kernel_initializer'):
            layer.kernel.initializer.run()
        if hasattr(layer, 'bias_initializer'):
            layer.bias.initializer.run()

    model.compile(loss='mse', optimizer='adam')
    model.fit(training_in, training_out, validation_data=(training_in, training_out))
    predict_out = model.predict(testing_in)

    print(f"testing_out: {testing_out}")
    print(f"predict_out: {predict_out}")

if __name__ == '__main__':
    main()

The expected output is

testing_out: [[[-9.6468736e+07 -9.6468736e+07]
  [-1.9293747e+08 -1.9293747e+08]
  [-3.8587494e+08 -3.8587494e+08]
  [-7.7174989e+08 -7.7174989e+08]
  [-1.5434998e+09 -1.5434998e+09]
  [-3.0869996e+09 -3.0869996e+09]]]

The actual output is

predict_out: [[[  2.   2.]
  [  6.   6.]
  [ 14.  14.]
  [ 30.  30.]
  [ 62.  62.]
  [132. 132.]]]
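
To isolate the problem, here is a pure-NumPy sketch (not recurrentshop; the `rnn_forward` helper and names are my own) of the same toy cell, where every hidden unit is the all-ones weighted sum of the concatenated `[x_t, y_tm1]` and the readout equals the previous hidden state. It shows that running the test segment from a zero state reproduces the small `[2, 6, 14, ...]` values above, while carrying over the final training state reproduces the full-sequence tail:

```python
import numpy as np

def rnn_forward(xs, h0):
    """Toy RNN matching the cell above: each hidden unit is the all-ones
    weighted sum of [x_t, h_tm1] (the readout y_tm1 equals h_tm1 here).
    Returns (per-step outputs, final hidden state)."""
    h = np.asarray(h0, dtype=float)
    outs = []
    for x in xs:
        s = x.sum() + h.sum()
        h = np.full_like(h, s)
        outs.append(h.copy())
    return np.array(outs), h

# The same 30-step, depth-2 input as in the question
A = [1, 1, -19] + [1] * 14 + [2] + [1] * 5 + [2] + [1] * 5 + [4]
B = [1, 1, -18] + [1] * 14 + [2] + [1] * 5 + [2] + [1] * 5 + [4]
xs = np.stack([A, B], axis=1).astype(float)

n_train = 24  # int(0.8 * 30)

# Full sequence from a zero state: its tail is the "expected output"
full_out, _ = rnn_forward(xs, np.zeros(2))

# Run the training segment and *save* the final hidden state
train_out, h_saved = rnn_forward(xs[:n_train], np.zeros(2))

# Test segment from a zero state: the "actual output" in the question
cold_out, _ = rnn_forward(xs[n_train:], np.zeros(2))

# Test segment seeded with the saved state: matches the full-sequence tail
warm_out, _ = rnn_forward(xs[n_train:], h_saved)
```

With the saved state, `warm_out` equals `full_out[n_train:]` exactly; from a zero state, `cold_out` restarts at `[2, 2], [6, 6], ...`, which is exactly the mismatch printed above. So whatever the recurrentshop-specific answer looks like, it has to capture `final_states` after the training segment and feed them back as `initial_states` for prediction.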
  • I don't understand what and where "additional hidden states" you want to add, but the first question is answered here: https://stackoverflow.com/questions/37969065/tensorflow-best-way-to-save-state-in-rnns – iga Nov 14 '18 at 19:14

0 Answers