I have:
self.model.add(Bidirectional(LSTM(lstm1_size, input_shape=(seq_length, feature_dim),
                                  return_sequences=True)))
self.model.add(BatchNormalization())
self.model.add(Dropout(0.2))
self.model.add(Bidirectional(LSTM(lstm2_size, return_sequences=True)))
self.model.add(BatchNormalization())
self.model.add(Dropout(0.2))
# BOTTLENECK HERE
self.model.add(Bidirectional(LSTM(lstm3_size, return_sequences=True)))
self.model.add(BatchNormalization())
self.model.add(Dropout(0.2))
self.model.add(Bidirectional(LSTM(lstm4_size, return_sequences=True)))
self.model.add(BatchNormalization())
self.model.add(Dropout(0.2))
self.model.add(Dense(feature_dim, activation='linear'))
However, I want to set this up as an autoencoder-like model without having to use two separate models. Where I have the comment BOTTLENECK HERE, I want the network to produce a single vector of some dimension, say bottleneck_dim.
After that, there should be some LSTM layers that reconstruct a sequence with the same dimensions as the initial input. However, I believe that adding a Dense layer will not return one vector, but will instead return a vector for each of the seq_length timesteps?