This example from the Keras documentation contains code like this:
```python
def transformer_encoder(inputs, head_size, num_heads, ff_dim, dropout=0):
    # Normalization and Attention
    x = layers.LayerNormalization(epsilon=1e-6)(inputs)
    x = layers.MultiHeadAttention(
        key_dim=head_size, num_heads=num_heads, dropout=dropout
    )(x, x)
    x = layers.Dropout(dropout)(x)
    res = x + inputs
```
What does the `(x, x)` in the `MultiHeadAttention` call mean?
Can I change the second `x` to be another input? I am trying to implement this model and don't know how to feed additional data in as the query input; would changing the second `x` achieve that?
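For context on what the two positional arguments do: Keras' `MultiHeadAttention` is called as `layer(query, value)`, with `key` defaulting to `value`, so `(x, x)` is plain self-attention. Below is a minimal numpy sketch (not the Keras implementation, just single-head scaled dot-product attention) illustrating the roles of the two arguments and what passing a different query tensor would look like:

```python
import numpy as np

def scaled_dot_product_attention(query, key_value):
    # query: (seq_q, d); key_value: (seq_kv, d). Key and value share one
    # tensor here, mirroring Keras' call(query, value) with key=None.
    d = query.shape[-1]
    scores = query @ key_value.T / np.sqrt(d)           # (seq_q, seq_kv)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax over keys
    return weights @ key_value                          # (seq_q, d)

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))       # one sequence, as in the snippet
other = rng.normal(size=(3, 8))   # a second sequence of a different length

self_attn = scaled_dot_product_attention(x, x)       # the "(x, x)" case
cross_attn = scaled_dot_product_attention(other, x)  # different query tensor

print(self_attn.shape)   # (5, 8)
print(cross_attn.shape)  # (3, 8): output length follows the query
```

Note that the output sequence length follows the *query* argument, which is the first of the two, not the second.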