
I am new to DL. I am trying to use an InceptionV3 model and fine-tune it as a binary classifier. My code looks like this:

import keras
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model

models = keras.applications.inception_v3.InceptionV3(weights='imagenet', include_top=False)


# add a global spatial average pooling layer
x = models.output
#x = GlobalAveragePooling2D()(x)
# add a fully-connected layer
x = Dense(1024, activation='relu')(x)
# and a logistic layer -- we have 2 classes
predictions = Dense(2, activation='softmax')(x)

# this is the model we will train
model = Model(inputs=models.input, outputs=predictions)

for layer in model.layers[:len(model.layers)-2]:
    layer.trainable = False
for layer in model.layers[-2:]:
    layer.trainable = True

model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=batch_size, epochs=nb_epoch,
          verbose=1,
          validation_split=0.25,
          class_weight='auto')

X_train Shape: (80, 299, 299, 3)

X_test Shape: (20, 299, 299, 3)

y_train Shape: (80, 2)

y_test Shape: (20, 2)
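
The (80, 2) label arrays are one-hot encoded; roughly like this (a simplified sketch, not the exact preprocessing code):

import numpy as np
from keras.utils import to_categorical

# hypothetical integer class labels (0 or 1) for the 80 training images
labels = np.random.randint(0, 2, size=(80,))
y_train = to_categorical(labels, num_classes=2)   # -> shape (80, 2)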

But I am getting a ValueError:

    ValueError                                Traceback (most recent call last)
<ipython-input-9-c06b0b388969> in <module>
    217 
    218     model = cnn_model(X_train, y_train, kernel_size, nb_filters, channels, nb_epoch, batch_size,
--> 219                       nb_classes)
    220 
    221     print("Predicting")

<ipython-input-9-c06b0b388969> in cnn_model(X_train, y_train, kernel_size, nb_filters, channels, nb_epoch, batch_size, nb_classes)
    152               verbose=1,
    153               validation_split=0.25,
--> 154               class_weight='auto')
    155 
    156     return model

~\Anaconda3\envs\tf_gpu\lib\site-packages\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, **kwargs)
    950             sample_weight=sample_weight,
    951             class_weight=class_weight,
--> 952             batch_size=batch_size)
    953         # Prepare validation data.
    954         do_validation = False

~\Anaconda3\envs\tf_gpu\lib\site-packages\keras\engine\training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, check_array_lengths, batch_size)
    787                 feed_output_shapes,
    788                 check_batch_axis=False,  # Don't enforce the batch size.
--> 789                 exception_prefix='target')
    790 
    791             # Generate sample-wise weight values given the `sample_weight` and

~\Anaconda3\envs\tf_gpu\lib\site-packages\keras\engine\training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
    126                         ': expected ' + names[i] + ' to have ' +
    127                         str(len(shape)) + ' dimensions, but got array '
--> 128                         'with shape ' + str(data_shape))
    129                 if not check_batch_axis:
    130                     data_shape = data_shape[1:]

ValueError: Error when checking target: expected dense_7 to have 4 dimensions, but got array with shape (80, 2)

I came across this answer https://stackoverflow.com/a/36842553, where the author mentions that 3 classification layers must be changed to make this work. Is there a way to do the same in Keras?

Is there a better way to use InceptionV3 model for classification?

  • How is this model considered correct? The official Keras documentation says to _train only the top layers by setting the base layers to non-trainable, then compile and train; after that, unfreeze some of the top layers of the base, recompile, and train again_ (see the sketch below). – bit_scientist Apr 14 '20 at 01:38
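
For reference, a minimal sketch of the two-phase recipe that comment describes, assuming models is the InceptionV3 base and model is the full classifier from the code above; the layer index 249 (top two inception blocks) follows the Keras applications example and is illustrative:

from keras.optimizers import SGD

# Phase 1: freeze the whole InceptionV3 base and train only the new top layers.
for layer in models.layers:
    layer.trainable = False
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=batch_size, epochs=nb_epoch, validation_split=0.25)

# Phase 2: unfreeze the top two inception blocks, recompile with a low learning
# rate, and fine-tune them together with the new top layers.
for layer in models.layers[:249]:
    layer.trainable = False
for layer in models.layers[249:]:
    layer.trainable = True
model.compile(optimizer=SGD(lr=0.0001, momentum=0.9),
              loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=batch_size, epochs=nb_epoch, validation_split=0.25)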

1 Answer


You did not flatten the tensor before the Dense classifier, which is what causes this exception. The output tensor of your model has a shape of:

Tensor("dense_1/truediv:0", shape=(?, ?, ?, 2), dtype=float32)

while the labels have a shape of (80, 2). How do you fix this?

Before passing the Inception output to the classifier, flatten the tensor:

import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.layers import Dense, Flatten, Input

# Define an explicit input so the base model's output has a known spatial shape.
inps = Input(shape=(299, 299, 3), name='image_input')
m = tf.keras.applications.inception_v3.InceptionV3(weights='imagenet', include_top=False)(inps)
# Collapse the 4-D feature map (batch, 8, 8, 2048) into 2-D before the Dense layers.
x = Flatten()(m)
x = Dense(1024, activation='relu')(x)
predictions = Dense(2, activation='softmax')(x)

model = Model(inputs=inps, outputs=predictions)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
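
A common alternative to Flatten here is GlobalAveragePooling2D, which is what the commented-out line in the question was going for; it pools the 8x8 feature map down to a 2048-length vector, so the first Dense layer has far fewer parameters. A sketch along the same lines:

import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D, Input

inps = Input(shape=(299, 299, 3), name='image_input')
x = tf.keras.applications.inception_v3.InceptionV3(weights='imagenet', include_top=False)(inps)
x = GlobalAveragePooling2D()(x)   # (None, 8, 8, 2048) -> (None, 2048)
x = Dense(1024, activation='relu')(x)
predictions = Dense(2, activation='softmax')(x)

model = Model(inputs=inps, outputs=predictions)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
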
– Amir