I am building a word-level seq2seq model for text summarisation and I am running into a data-shape issue. Any help is appreciated, thanks.
from tensorflow.keras.layers import Input, Embedding, LSTM, Dense
from tensorflow.keras.models import Model

# Encoder: integer token IDs -> embeddings -> final LSTM states
encoder_input = Input(shape=(max_encoder_seq_length,))
embed_layer = Embedding(num_encoder_tokens, 256, mask_zero=True)(encoder_input)
encoder = LSTM(256, return_state=True, return_sequences=False)
encoder_output, state_h, state_c = encoder(embed_layer)
encoder_state = [state_h, state_c]

# Decoder: conditioned on the encoder states, emits a distribution per timestep
decoder_input = Input(shape=(max_decoder_seq_length,))
de_embed = Embedding(num_decoder_tokens, 256)(decoder_input)
decoder = LSTM(256, return_state=True, return_sequences=True)
decoder_output, _, _ = decoder(de_embed, initial_state=encoder_state)
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_output = decoder_dense(decoder_output)

model = Model([encoder_input, decoder_input], decoder_output)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
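From what I understand of the docs, an Embedding layer is just a lookup table: each integer ID maps to a 256-dim vector, so a 2-D batch of IDs becomes 3-D output, and a 3-D one-hot batch would become 4-D. A numpy sketch of that behaviour (toy sizes, not my real num_encoder_tokens or 256):

import numpy as np

vocab_size, embed_dim = 10, 4                       # toy sizes for illustration
table = np.random.rand(vocab_size, embed_dim)       # stand-in for the learned embedding matrix

ids_2d = np.array([[1, 3, 0], [2, 2, 5]])           # (batch=2, timesteps=3) integer IDs
print(table[ids_2d].shape)                          # (2, 3, 4): 3-D, what the LSTM expects

one_hot_3d = np.eye(vocab_size)[ids_2d]             # (2, 3, 10) one-hot, shaped like my data
print(table[one_hot_3d.argmax(axis=-1)].shape)      # (2, 3, 4) again once one-hot is collapsed

So the layer adds one axis to whatever it is fed, which seems to explain the ndim=4 error below.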
Training fails with a shape error on the inputs. Please help me reshape my data; the current shapes are:

encoder data shape: (50, 1966, 7059)
decoder data shape: (50, 69, 1183)
decoder target shape: (50, 69, 1183)
Epoch 1/35
WARNING:tensorflow:Model was constructed with shape (None, 1966) for input Tensor("input_37:0", shape=(None, 1966), dtype=float32), but it was called on an input with incompatible shape (None, 1966, 7059).
WARNING:tensorflow:Model was constructed with shape (None, 69) for input Tensor("input_38:0", shape=(None, 69), dtype=float32), but it was called on an input with incompatible shape (None, 69, 1183).
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-71-d02252f12e7f> in <module>()
1 model.fit([encoder_input_data, decoder_input_data], decoder_target_data,
2 batch_size=16,
----> 3 epochs=35)
ValueError: Input 0 of layer lstm_35 is incompatible with the layer: expected ndim=3, found ndim=4. Full shape received: [None, 1966, 7059, 256]
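My guess is that my encoder/decoder input arrays are one-hot encoded (3-D), while the Embedding layers expect 2-D arrays of integer token IDs. A minimal sketch of collapsing the one-hot axis back to IDs with argmax (using small dummy shapes in place of my real (50, 1966, 7059) arrays):

import numpy as np

# Dummy stand-ins for my real one-hot arrays (real shapes would be far larger)
num_samples, enc_len, enc_vocab = 4, 10, 20
ids = np.random.randint(0, enc_vocab, (num_samples, enc_len))
encoder_input_data = np.eye(enc_vocab)[ids]         # (4, 10, 20) one-hot

# Collapse the one-hot axis: each timestep becomes its integer token ID
encoder_input_ids = np.argmax(encoder_input_data, axis=-1)
print(encoder_input_ids.shape)                      # (4, 10): 2-D, as Embedding expects

The same argmax would apply to decoder_input_data; decoder_target_data can stay one-hot since the loss is categorical_crossentropy. Is this the right way to fix the shapes, or should I change the model instead?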