
I'm trying to model a sequence of images: I have 2 input images and need to predict the 3rd one. All are color images.

I'm getting the error below:

ValueError: Error when checking input: expected time_distributed_1_input to have 5 dimensions, but got array with shape (32, 128, 128, 6)

This is my model:

batch_size = 32
height = 128
width = 128
model = Sequential()
model.add(TimeDistributed(Conv2D(32, (3, 3), activation = 'relu'), input_shape=(batch_size, height, width, 2 * 3)))
model.add(TimeDistributed(MaxPooling2D(2, 2)))
model.add(TimeDistributed(BatchNormalization()))
model.add(TimeDistributed(Conv2D(32, (3, 3), activation='relu', padding='same')))
model.add(Dropout(0.3))
model.add(Flatten())
model.add(LSTM(256, return_sequences=True, dropout=0.5))
model.add(Conv2D(3, (3, 3), activation='relu', padding='same'))
model.compile(optimizer='adam')
model.summary()

My input images shapes are: (128, 128, 2*3) [as I'm concatenating 2 input images]

My output image shape is: (128, 128, 3)

Temp Expt

1 Answer


You have applied a conv layer after Flatten(). This causes an error because, after flattening, the data flowing through the network is no longer a 2D object.

I suggest keeping the convolutional and recurrent phases separate. First, apply convolutions to the images, training the model to extract their relevant features. Then feed these features into LSTM layers, so that you can also capture the information hidden in their sequence.
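For example, a minimal sketch of that idea (the layer sizes, the Dense output head, and the loss are illustrative assumptions, not the only valid choices):

from keras.models import Sequential
from keras.layers import TimeDistributed, Conv2D, MaxPooling2D, BatchNormalization, Flatten, LSTM, Dense, Dropout

time_steps = 2                      # two input frames
height, width, channels = 128, 128, 3

model = Sequential()
# Convolutional phase: applied to each frame separately via TimeDistributed
model.add(TimeDistributed(Conv2D(32, (3, 3), activation='relu', padding='same'),
                          input_shape=(time_steps, height, width, channels)))
model.add(TimeDistributed(MaxPooling2D((2, 2))))
model.add(TimeDistributed(BatchNormalization()))
# One feature vector per frame, so the LSTM receives a sequence of vectors
model.add(TimeDistributed(Flatten()))
# Recurrent phase: captures the information across the sequence
model.add(LSTM(256, dropout=0.5))
model.add(Dropout(0.3))
# Illustrative head: predict a flattened 128x128x3 image and reshape it outside the model
model.add(Dense(height * width * channels, activation='sigmoid'))
model.compile(optimizer='adam', loss='mse')
model.summary()

The point is simply that the TimeDistributed convolutional layers all come before Flatten() and the LSTM, and no Conv2D() appears after flattening.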

Hope this helps, otherwise let me know.

--

EDIT:

According to the error that you get, it seems that you are also not feeding the exact input shape. Keras is saying: "I need 5 dimensions, but you gave me 4". A TimeDistributed() layer needs a shape such as (samples, time, width, height, channels). Your input apparently lacks the time dimension.

I suggest printing model.summary() before running and checking the layer called time_distributed_1_input. That's the layer the error message is complaining about.
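For example, you can also inspect directly what input the first layer expects (the layer index here is an assumption; adjust it to your model):

model.summary()
# Expected input of the first TimeDistributed layer; with a correct setup this is
# 5-D, e.g. (None, time, height, width, channels)
print(model.layers[0].input_shape)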

Leevo
  • Could you please provide an example with the above code? – Temp Expt May 27 '19 at 09:16
  • I don't know your dataset or your preferences about NN architecture. You should go for some sequence of layers like `[ conv & maxpool; LSTM; Dense + dropout ]`. That was just a hint. I pointed out that you cannot put a `Conv2D()` layer after a `Flatten()`; that will return an error for sure. – Leevo May 27 '19 at 10:05
  • `input_images = np.zeros((batch_size, config.width, config.height, 3 * 2)); output_images = np.zeros((batch_size, config.width, config.height, 3))` This is my input size. – Temp Expt May 27 '19 at 13:59
  • Your `input_images` has four dimensions. When you use `TimeDistributed` layers you also have to include a fifth dimension: 'time'. In fact, your error message says: "I expected your timedistributed object to have 5 dimensions, but I got only 4" (see the reshape sketch after this comment thread). You can read more about this [here](https://stackoverflow.com/questions/47305618/what-is-the-role-of-timedistributed-layer-in-keras) and [here](https://machinelearningmastery.com/timedistributed-layer-for-long-short-term-memory-networks-in-python/). – Leevo May 27 '19 at 14:57
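
A minimal sketch of restoring the time axis, assuming the two frames were concatenated along the channel axis and reusing the `input_images`/`batch_size` names from the comment above:

import numpy as np

# input_images has shape (batch_size, 128, 128, 6): two frames stacked on the channel axis
x = input_images.reshape(batch_size, 128, 128, 2, 3)  # split the 6 channels into 2 frames of 3 channels
x = np.transpose(x, (0, 3, 1, 2, 4))                  # move the frame axis next to the batch axis
print(x.shape)  # (32, 2, 128, 128, 3): the 5 dimensions TimeDistributed expects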