I am developing a deep learning model where the first two layers are Bi-LSTMs. The output of the second Bi-LSTM should be the input of a 1D CNN. However, I am getting this error:

ValueError: Input 0 of layer "conv1d" is incompatible with the layer: expected min_ndim=3, found ndim=2. Full shape received: (None, 128)

when execution reaches this part:

    conv_1 = Conv1D(
        filters=hp.Int('conv_1_filter', min_value=32, max_value=128, step=16),
        kernel_size=hp.Choice('convolution_1', values = [2,6])
    )(bilstm_2)
    conv_1 = GlobalMaxPooling1D()(conv_1) 
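
For what it's worth, the (None, 128) in the error seems to come from the second Bi-LSTM: with return_sequences=False, Bidirectional(LSTM(64)) returns only the final state, and the forward and backward halves are concatenated (64 + 64 = 128). A minimal shape check, with placeholder sizes everywhere except the LSTM units:

    from tensorflow.keras.layers import Input, Embedding, Bidirectional, LSTM

    x = Input(shape=(100,))                                  # (None, 100)
    x = Embedding(1000, 32)(x)                               # (None, 100, 32)
    x = Bidirectional(LSTM(128, return_sequences=True))(x)   # (None, 100, 256)
    x = Bidirectional(LSTM(64, return_sequences=False))(x)   # (None, 128) -- 2D
    print(x.shape)                                           # (None, 128)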

Full Code:

def create_model(hp):
    inputs = Input(name='inputs',shape=[max_len])
    embedding_layer = Embedding(vocab_size, dimention, input_length=max_len)(inputs)
    bilstm_1 = Bidirectional(LSTM(128, return_sequences=True))(embedding_layer)
    dropout_1 = Dropout(0.5)(bilstm_1)
    bilstm_2 = Bidirectional(LSTM(64, return_sequences=False))(dropout_1)
    dropout_2 = Dropout(0.5)(bilstm_2)
    print(bilstm_2.shape)
    
    conv_1 = Conv1D(
        filters=hp.Int('conv_1_filter', min_value=32, max_value=128, step=16),
        kernel_size=hp.Choice('convolution_1', values = [2,6])
    )(dropout_2)
    conv_1 = GlobalMaxPooling1D()(conv_1)

I tried to change the input shape, as mentioned here, but I am still getting the same error:

def create_model(hp):
    inputs = Input(name='inputs',shape=[max_len])
    embedding_layer = Embedding(vocab_size, dimention, input_length=max_len)(inputs)
    bilstm_1 = Bidirectional(LSTM(128, return_sequences=True))(embedding_layer)
    dropout_1 = Dropout(0.5)(bilstm_1)
    bilstm_2 = Bidirectional(LSTM(64, return_sequences=False))(dropout_1)
    dropout_2 = Dropout(0.5)(bilstm_2)

    series_input = Input(shape = (bilstm_2.shape[1],1,))
                         
    conv_1 = Conv1D(
        filters=hp.Int('conv_1_filter', min_value=32, max_value=128, step=16),
        kernel_size=hp.Choice('convolution_1', values = [2,6]),
        input_shape=[None,series_input]
    )(dropout_2)
    conv_1 = GlobalMaxPooling1D()(conv_1)

I also tried adding the parameter batch_input_shape=(None, 128, 1), as mentioned here, but it didn't work.

I also tried adding a Reshape layer before conv_1, as mentioned here:

    Reshape((1, 128))(dropout_2)
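
If I understand the shapes correctly, this still cannot work: Reshape((1, 128)) produces (None, 1, 128), so the convolution sees only a single timestep, which is shorter than either kernel size (2 or 6). A quick check with a dummy batch:

    import tensorflow as tf
    from tensorflow.keras.layers import Reshape

    x = tf.zeros((4, 128))        # stand-in for dropout_2: (batch, features)
    y = Reshape((1, 128))(x)
    print(y.shape)                # (4, 1, 128): 3D, but only one timestep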

Blue Cheese

1 Answer

Just change this line:

    bilstm_2 = Bidirectional(LSTM(64, return_sequences=True))(dropout_1)

Conv1D expects a 3D input of shape (batch, timesteps, features), but with return_sequences=False the second Bi-LSTM returns only its final state, a 2D tensor of shape (None, 128). With return_sequences=True the time axis is kept, so the convolution has a sequence to slide over.
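
A minimal sketch of the full model with that one change applied. It assumes max_len, vocab_size and dimention are defined as in the question, hp is a Keras Tuner HyperParameters object, and the Dense head at the end is just a placeholder so the sketch compiles:

    from tensorflow.keras.layers import (Input, Embedding, Bidirectional, LSTM,
                                         Dropout, Conv1D, GlobalMaxPooling1D, Dense)
    from tensorflow.keras.models import Model

    def create_model(hp):
        inputs = Input(name='inputs', shape=[max_len])
        embedding_layer = Embedding(vocab_size, dimention, input_length=max_len)(inputs)
        bilstm_1 = Bidirectional(LSTM(128, return_sequences=True))(embedding_layer)
        dropout_1 = Dropout(0.5)(bilstm_1)
        # return_sequences=True keeps the time axis: (None, max_len, 128)
        bilstm_2 = Bidirectional(LSTM(64, return_sequences=True))(dropout_1)
        dropout_2 = Dropout(0.5)(bilstm_2)

        conv_1 = Conv1D(
            filters=hp.Int('conv_1_filter', min_value=32, max_value=128, step=16),
            kernel_size=hp.Choice('convolution_1', values=[2, 6])
        )(dropout_2)
        conv_1 = GlobalMaxPooling1D()(conv_1)  # pools over time: (None, filters)

        # Placeholder head; replace with whatever the task needs
        outputs = Dense(1, activation='sigmoid')(conv_1)
        model = Model(inputs=inputs, outputs=outputs)
        model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
        return model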

TheEngineerProgrammer