I am trying to implement a 3D convolutional neural network for medical imaging data made up of 10 contiguous image slices, each 64x64 in shape. They are grayscale images, so my input dimension is 64 x 64 x 10 and my first layer is

from keras.models import Sequential
from keras.layers import Conv3D, Activation, MaxPooling3D

model = Sequential()
model.add(Conv3D(32, kernel_size=(3, 3, 3), strides=(1, 1, 1), input_shape=(64, 64, 10)))
model.add(Activation('relu'))
model.add(MaxPooling3D(pool_size=(2, 2, 2)))

With this code I get the error

Input 0 is incompatible with layer conv3d_1: expected ndim=5, found ndim=4

Therefore I reshaped my input to

model = Sequential()
model.add(Conv3D(32, kernel_size=(3, 3, 3), strides=(1, 1, 1), input_shape=(64, 64, 10, 1)))
model.add(Activation('relu'))
model.add(MaxPooling3D(pool_size=(2, 2, 2)))

Now I get the error

ValueError: ('Input data in `NumpyArrayIterator` should have rank 4. You passed an array with shape', (128, 64, 64, 10, 1))

I have tried to override this in the Keras code, but that leads to more errors, and I am pretty sure that a volume of slices can be input - I just can't see where the issue is.
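
(For context: the second error can be reproduced with a sketch like the one below, assuming the volumes are fed through ImageDataGenerator.flow - NumpyArrayIterator is the iterator that flow creates, and it only accepts rank-4 (samples, height, width, channels) arrays. The array names here are hypothetical.)

import numpy as np
from keras.preprocessing.image import ImageDataGenerator

# Hypothetical data: 128 volumes of 10 slices, each 64x64, one channel
X_train = np.zeros((128, 64, 64, 10, 1))
y_train = np.zeros((128,))

datagen = ImageDataGenerator()
# flow() constructs a NumpyArrayIterator, which rejects any array that is
# not rank 4 -- raising the ValueError quoted above for this 5D input
batches = datagen.flow(X_train, y_train, batch_size=16)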

  • Could you edit your question showing how you initialise your numpy array, and also what you are passing to your model's compile and fit functions. – Phil Dec 13 '17 at 17:26
  • **Where** are you getting this error? – Daniel Möller Dec 13 '17 at 17:48
  • Looks to me like somewhere you batched the data with a batch size of 128. I would say your input shape should look like (1, 64, 64, 10), not (64, 64, 10, 1), and instead of one, try batching it, because it seems like somewhere in your code you batched some array. Would need to see more to figure out. – Hasnain Raza Dec 14 '17 at 15:01

2 Answers

This is a headache I've had for a few days.

What is happening is that Keras automatically treats the number of channels in an image as its depth, and uses it to set the final filter size. You should use Conv2D instead, since you have 3-dimensional images (you can understand them the way you would RGB images, with the 10 slices as channels).

As I said, Keras fixes the depth automatically to the number of channels, so if you use Conv2D and set the filter size to (5, 5), it will really be (5, 5, n_channels).

Replace:

model.add(Conv3D(32, kernel_size=(3, 3, 3), strides=(1, 1, 1), input_shape=(64, 64, 10)))

To:

model.add(Conv2D(32, kernel_size=(3, 3), strides=(1, 1), input_shape=(64, 64, 10)))

You can see what is really happening in this image: KerasConv2D
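
Putting it together, a minimal runnable sketch of this approach (treating the 10 slices as channels; only the first layers are shown):

from keras.models import Sequential
from keras.layers import Conv2D, Activation, MaxPooling2D

model = Sequential()
# Each (3, 3) kernel actually spans (3, 3, 10), i.e. all 10 slices at once
model.add(Conv2D(32, kernel_size=(3, 3), strides=(1, 1), input_shape=(64, 64, 10)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

# Training arrays should then be shaped (n_samples, 64, 64, 10)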

In case you want to work combining different channels, you will have to create different towers with Keras (each receiving different channels) and then put them together; a minimal sketch follows below.

You can also see what is happening in this link.
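
For the tower idea, a minimal functional-API sketch, assuming you split the 10 slices into two groups of 5 (the split and the layer sizes are just illustrative):

from keras.models import Model
from keras.layers import Input, Lambda, Conv2D, concatenate

inputs = Input(shape=(64, 64, 10))

# Hypothetical split: one tower for the first 5 slices, one for the rest
first_half = Lambda(lambda x: x[..., :5])(inputs)
second_half = Lambda(lambda x: x[..., 5:])(inputs)

tower_1 = Conv2D(16, (3, 3), activation='relu')(first_half)
tower_2 = Conv2D(16, (3, 3), activation='relu')(second_half)

# Merge the towers back into a single feature map
merged = concatenate([tower_1, tower_2])
model = Model(inputs=inputs, outputs=merged)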

With the default channels_last data format, input_shape for Conv3D has 4 dimensions, with the batch dimension omitted: (time_sequence, width, height, channels).

In your case:

input_shape = (10, 64, 64, 1)
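
For example, a minimal runnable sketch under that convention - the reshaping at the end assumes your arrays currently come as (n_samples, 64, 64, 10), which is an assumption about your pipeline:

import numpy as np
from keras.models import Sequential
from keras.layers import Conv3D, Activation, MaxPooling3D

model = Sequential()
model.add(Conv3D(32, kernel_size=(3, 3, 3), strides=(1, 1, 1), input_shape=(10, 64, 64, 1)))
model.add(Activation('relu'))
model.add(MaxPooling3D(pool_size=(2, 2, 2)))

# Move the slice axis to the front and add an explicit channel axis:
# (n_samples, 64, 64, 10) -> (n_samples, 10, 64, 64, 1)
X = np.zeros((128, 64, 64, 10))  # hypothetical data
X = np.moveaxis(X, -1, 1)[..., np.newaxis]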
– photeesh