With the following code:
from keras.models import Sequential
from keras.layers import LSTM, Dense, Dropout, Flatten, PReLU

num_features = data.shape[2]  # data is shaped (samples, timesteps, features)
num_samples = data.shape[1]   # note: this is actually the number of timesteps per sample

model = Sequential()
model.add(LSTM(16, batch_input_shape=(None, num_samples, num_features),
               return_sequences=True, activation='tanh'))
model.add(PReLU())
model.add(Dropout(0.5))
model.add(LSTM(8, return_sequences=True, activation='tanh'))
model.add(Dropout(0.1))
model.add(PReLU())
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))
I'm trying to understand how I can add an attention mechanism before the first LSTM layer. I found the keras-attention-mechanism repository on GitHub by Philippe Rémy, but I couldn't figure out how exactly to use it with my code. Below is my attempt so far.
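From reading the repo, my understanding is that the "attention before LSTM" variant learns a softmax weight over the timestep axis and re-weights the raw inputs before they reach the LSTM, which requires switching from Sequential to the functional API. Here is a minimal sketch of what I think that would look like with my shapes (the attention block and the attention_vec layer name are my own adaptation, not necessarily the repo's exact code):

from keras.models import Model
from keras.layers import (Input, Dense, Dropout, Flatten, LSTM,
                          Multiply, Permute, PReLU)

inputs = Input(shape=(num_samples, num_features))  # (timesteps, features)

# attention over the timestep axis: learn a softmax weight per timestep,
# separately for each feature, and re-weight the raw inputs with it
a = Permute((2, 1))(inputs)                          # -> (batch, features, timesteps)
a = Dense(num_samples, activation='softmax')(a)      # softmax across timesteps
a_probs = Permute((2, 1), name='attention_vec')(a)   # back to (batch, timesteps, features)
attention_mul = Multiply()([inputs, a_probs])

x = LSTM(16, return_sequences=True, activation='tanh')(attention_mul)
x = PReLU()(x)
x = Dropout(0.5)(x)
x = LSTM(8, return_sequences=True, activation='tanh')(x)
x = Dropout(0.1)(x)
x = PReLU()(x)
x = Flatten()(x)
outputs = Dense(1, activation='sigmoid')(x)

model = Model(inputs=inputs, outputs=outputs)

Is this the right way to wire the attention block in front of the first LSTM?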
I would also like to visualize the attention weights, to see which timesteps and features the model focuses on.
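My current idea for the visualization (assuming the sketch above, where the Permute layer producing the weights is named attention_vec) is to build a second Model that outputs that layer's activations and plot them as a heatmap:

import matplotlib.pyplot as plt
from keras.models import Model

# second model that maps the same input to the attention weights
attention_model = Model(inputs=model.input,
                        outputs=model.get_layer('attention_vec').output)
attention_weights = attention_model.predict(data)  # (samples, timesteps, features)

# average over samples and show timestep-vs-feature weights as a heatmap
mean_attention = attention_weights.mean(axis=0)
plt.imshow(mean_attention.T, aspect='auto', cmap='viridis')
plt.xlabel('timestep')
plt.ylabel('feature')
plt.colorbar(label='attention weight')
plt.show()

I'm not sure whether this actually shows what the model attends to, or whether there is a better way.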
Any help would be appreciated, especially a code modification. Thanks :)