
I am building a prediction model for sequence data using the Conv1D layer provided by Keras. This is how I did it:

model= Sequential()
model.add(Conv1D(60,32, strides=1, activation='relu',padding='causal',input_shape=(None,64,1)))
model.add(Conv1D(80,10, strides=1, activation='relu',padding='causal'))
model.add(Dropout(0.25))
model.add(Conv1D(100,5, strides=1, activation='relu',padding='causal'))
model.add(MaxPooling1D(1))
model.add(Dropout(0.25))
model.add(Dense(300,activation='relu'))
model.add(Dense(1,activation='relu'))
print(model.summary())

However, it fails with the following traceback:

Traceback (most recent call last):
File "processing_2a_1.py", line 96, in <module>
model.add(Conv1D(60,32, strides=1, activation='relu',padding='causal',input_shape=(None,64,1)))
File "build/bdist.linux-x86_64/egg/keras/models.py", line 442, in add
File "build/bdist.linux-x86_64/egg/keras/engine/topology.py", line 558, in __call__
File "build/bdist.linux-x86_64/egg/keras/engine/topology.py", line 457, in assert_input_compatibility
ValueError: Input 0 is incompatible with layer conv1d_1: expected ndim=3, found ndim=4

The shapes of the training and validation data are as follows:

('X_train shape ', (1496000, 64, 1))
('Y_train shape ', (1496000, 1))
('X_val shape ', (374000, 64, 1))
('Y_val shape ', (374000, 1))

I think the input_shape in the first layer was not set up correctly. How should I set it up?


Update: After using input_shape=(64,1), I get the following error message, even though the model summary prints without issue:

________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv1d_1 (Conv1D)            (None, 64, 60)            1980
_________________________________________________________________
conv1d_2 (Conv1D)            (None, 64, 80)            48080
_________________________________________________________________
dropout_1 (Dropout)          (None, 64, 80)            0
_________________________________________________________________
conv1d_3 (Conv1D)            (None, 64, 100)           40100
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 64, 100)           0
_________________________________________________________________
dropout_2 (Dropout)          (None, 64, 100)           0
_________________________________________________________________
dense_1 (Dense)              (None, 64, 300)           30300
_________________________________________________________________
dense_2 (Dense)              (None, 64, 1)             301
=================================================================
Total params: 120,761
Trainable params: 120,761
Non-trainable params: 0
_________________________________________________________________
None
Traceback (most recent call last):
  File "processing_2a_1.py", line 125, in <module>
    history=model.fit(X_train, Y_train, batch_size=batch_size, validation_data=(X_val,Y_val), epochs=nr_of_epochs,verbose=2)
  File "build/bdist.linux-x86_64/egg/keras/models.py", line 871, in fit
  File "build/bdist.linux-x86_64/egg/keras/engine/training.py", line 1524, in fit
  File "build/bdist.linux-x86_64/egg/keras/engine/training.py", line 1382, in _standardize_user_data
  File "build/bdist.linux-x86_64/egg/keras/engine/training.py", line 132, in _standardize_input_data
ValueError: Error when checking target: expected dense_2 to have 3 dimensions, but got array with shape (1496000, 1)

3 Answers


You should either change input_shape to

input_shape=(64,1)

... or use batch_input_shape:

batch_input_shape=(None, 64, 1)

This discussion explains the difference between the two in Keras in detail.
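
For example, with the first layer from the question, either of these builds without the ndim error (a minimal sketch showing only the first layer; the rest of the model is unchanged):

from keras.models import Sequential
from keras.layers import Conv1D

# Option 1: per-sample shape (timesteps, channels); Keras adds the batch dimension itself
model = Sequential()
model.add(Conv1D(60, 32, strides=1, activation='relu', padding='causal',
                 input_shape=(64, 1)))

# Option 2: full batch shape, with None standing in for a variable batch size
model = Sequential()
model.add(Conv1D(60, 32, strides=1, activation='relu', padding='causal',
                 batch_input_shape=(None, 64, 1)))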

Maxim
  • Hi Maxim, I changed it to (64,1) as suggested, but got the error message, please see my edited post above. The batch_input_shape has the same error message. Thanks. – user288609 Apr 15 '18 at 12:40
  • Yes. What do you want to compare? You probably need to do more max-pooling to downsample the tensor from `64` to `1` in the end. Note that `MaxPooling1D(1)` doesn't do anything. – Maxim Apr 15 '18 at 12:46
  • I added model.add(Flatten()) between the last dropout layer and the first dense layer, and it works (see the sketch below). – user288609 Apr 15 '18 at 14:09
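
Putting the pieces together, here is a sketch of the model with input_shape=(64, 1) and a Flatten() between the last dropout and the first dense layer. The convolutional layers are taken as-is from the question; the compile step (optimizer/loss) is a made-up example, not from the original post:

from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, Dropout, Flatten, Dense

model = Sequential()
model.add(Conv1D(60, 32, strides=1, activation='relu', padding='causal',
                 input_shape=(64, 1)))
model.add(Conv1D(80, 10, strides=1, activation='relu', padding='causal'))
model.add(Dropout(0.25))
model.add(Conv1D(100, 5, strides=1, activation='relu', padding='causal'))
model.add(MaxPooling1D(1))
model.add(Dropout(0.25))
model.add(Flatten())   # collapses (64, 100) into (6400,), so the Dense layers output 2-D tensors
model.add(Dense(300, activation='relu'))
model.add(Dense(1, activation='relu'))
model.compile(optimizer='adam', loss='mse')   # hypothetical optimizer/loss for illustration
print(model.summary())

With Flatten() in place the final output shape is (None, 1), which matches Y_train of shape (1496000, 1). Alternatively, as Maxim's comment suggests, downsampling the sequence dimension with pooling (for example GlobalMaxPooling1D) before the Dense layers would also work.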

I had the same issue. I found that expanding the dimensions of the input data with tf.expand_dims fixed it:

x = tf.expand_dims(x, axis=-1)
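
For instance, if x is loaded as a 2-D array of shape (samples, 64), this adds the trailing channel axis that input_shape=(64, 1) expects (a sketch with made-up random data, assuming a TensorFlow backend):

import numpy as np
import tensorflow as tf

x = np.random.rand(1000, 64).astype('float32')   # (samples, timesteps) without a channel axis
x = tf.expand_dims(x, axis=-1)                    # now shape (1000, 64, 1)
print(x.shape)

np.expand_dims(x, axis=-1) does the same thing for a plain NumPy array if you prefer not to go through a TensorFlow op.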
Dakota Even

In my case I wanted to use Conv2D on a single 20×32 feature map, and did:

print(kws_x_train.shape)                     # (8000,20,32)
model = tf.keras.models.Sequential([
  tf.keras.layers.Conv2D(16, (3, 8), input_shape=(20,32)),
])
...
model.fit(kws_x_train, kws_y_train, epochs=15)

which gives "expected ndim=4, found ndim=3. Full shape received: [None, 20, 32]". You need to tell Conv2D that there is only 1 feature map, and add an extra dimension to the input array. This worked:

kws_x_train2 = kws_x_train.reshape(kws_x_train.shape + (1,))
print(kws_x_train2.shape)                     # (8000,20,32,1)
model = tf.keras.models.Sequential([
  tf.keras.layers.Conv2D(16, (3, 8), input_shape=(20,32,1)),
])
...
model.fit(kws_x_train2, kws_y_train, epochs=15)
Axel Bregnsbo