I have built an LSTM architecture using Keras. My goal is to map length-29 time-series input sequences of floats to length-29 output sequences of floats. I am trying to implement a "many-to-many" approach, and I followed this post for implementing such a model.
I start by reshaping each data point into an np.array of shape `(1, 29, 1)`. I have multiple data points and train the model on each one separately.
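For example (simplified with random data; `series_in` and `series_out` stand in for one raw input/target pair of length-29 float arrays):

```python
import numpy as np

series_in = np.random.rand(29).astype(np.float32)   # one input series
series_out = np.random.rand(29).astype(np.float32)  # its target series

# each data point becomes an (X, Y) pair, both reshaped to (1, 29, 1)
train_dict = {
    'point_0': (series_in.reshape(1, 29, 1), series_out.reshape(1, 29, 1)),
}
```

The following code is how I build my model: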
```python
import tensorflow as tf

def build_model():
    # define model
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.LSTM(29, return_sequences=True, input_shape=(29, 1)))
    model.add(tf.keras.layers.LeakyReLU(alpha=0.3))
    model.compile(optimizer='sgd', loss='mse', metrics=['mae'])

    # train on each data point separately
    # (features_type and target_type are dtype constants defined earlier)
    for point in train_dict:
        train_data = train_dict[point]
        # cast the point into an (X, Y) dataset
        train_dataset = tf.data.Dataset.from_tensor_slices((
            tf.cast(train_data[0], features_type),
            tf.cast(train_data[1], target_type))
        ).repeat()
        # fit model
        model.fit(train_dataset, epochs=100, steps_per_epoch=1, verbose=0)

    print(model.summary())
    return model
```
I am confused because when I call `model.predict(test_point, steps=1, verbose=1)`, the model returns 29 sequences of length 29. Based on my understanding of the linked post, I don't see why this happens.
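Concretely (assuming `test_point` is shaped `(1, 29, 1)` like the training points):

```python
pred = model.predict(test_point, steps=1, verbose=1)
print(pred.shape)  # (1, 29, 29): 29 timesteps, each a 29-dimensional vector
```

I expected a single length-29 sequence per input instead.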
When I try `return_state=True` instead of `return_sequences=True`, my code raises this error:

```
ValueError: All layers in a Sequential model should have a single output tensor. For multi-output layers, use the functional API.
```
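For reference, the failing variant is just this one-line change inside `build_model`:

```python
# return_state=True makes the LSTM return [output, state_h, state_c],
# i.e. three tensors, which a Sequential model cannot accept from one layer
model.add(tf.keras.layers.LSTM(29, return_state=True, input_shape=(29, 1)))
```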
How do I fix this so that each length-29 input maps to a single length-29 output sequence?