I want to take the pretrained VGG16 model in Keras, remove its output layer, attach a new output layer with the number of classes suited to my problem, and then fit it on new data. For this reason, I am trying to use the model here: https://keras.io/applications/#vgg16, but since it is not Sequential, I cannot just call model.pop(). Popping from model.layers and adding a new layer also does not work, because the predictions still expect the old output shape. How would I do that? Is there a way to convert this type of model to Sequential?
– Hristo Vrigazov
2 Answers
You can use pop() on model.layers and then use model.layers[-1].output to create the new layers. Example:
from keras.models import Model
from keras.layers import Dense, Flatten
from keras.applications import vgg16

# Load VGG16 with its fully connected classifier on top
model = vgg16.VGG16(weights='imagenet', include_top=True)
model.summary(line_length=150)

# Drop the 'predictions' and 'fc2' layers from the layer list
model.layers.pop()
model.layers.pop()
model.summary(line_length=150)

# Attach a new 10-class softmax head to the last remaining layer
new_layer = Dense(10, activation='softmax', name='my_dense')
inp = model.input
out = new_layer(model.layers[-1].output)

model2 = Model(inp, out)
model2.summary(line_length=150)
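If you then want to train only the new head on your data, a minimal follow-up sketch (the optimizer choice and the x_train/y_train arrays are placeholders, not part of the original answer):

# Freeze the pretrained layers so only the new softmax head is updated
for layer in model2.layers[:-1]:
    layer.trainable = False

model2.compile(optimizer='adam',
               loss='categorical_crossentropy',
               metrics=['accuracy'])

# x_train: (n, 224, 224, 3) images, y_train: (n, 10) one-hot labels (hypothetical)
model2.fit(x_train, y_train, batch_size=32, epochs=5)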
Alternatively, you can use the include_top=False option of these models. In that case, if you want to add a Flatten layer, you also need to pass input_shape so the flattened output shape is fully defined.
# Load VGG16 without its classifier; input_shape is required for Flatten to work
model3 = vgg16.VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
model3.summary(line_length=150)

# Flatten the convolutional features and attach a new 10-class softmax head
flatten = Flatten()
new_layer2 = Dense(10, activation='softmax', name='my_dense_2')

inp2 = model3.input
out2 = new_layer2(flatten(model3.output))

model4 = Model(inp2, out2)
model4.summary(line_length=150)
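As a side note, include_top=False also accepts a pooling argument, which avoids the Flatten and input_shape requirements entirely; a minimal sketch of that variant (the model and layer names here are illustrative, not from the answer):

# Global average pooling collapses the spatial dimensions, so no Flatten
# (and no fixed input_shape) is needed before the new head
model5 = vgg16.VGG16(weights='imagenet', include_top=False, pooling='avg')
out5 = Dense(10, activation='softmax', name='my_dense_3')(model5.output)
model6 = Model(model5.input, out5)
model6.summary(line_length=150)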

– indraforyou
- This is very important! It solves a problem I had been struggling with for hours. Thank you! – user40780 May 04 '17 at 01:20
- Incredibly useful. This should be in the official Keras documentation. – Łukasz Sromek Jan 17 '18 at 21:03
- If the model was trained and we pop the last layer, does it affect the weights of the other layers? – Wesam Na Apr 06 '18 at 08:54
- @WesamNa No, the weights of the remaining layers stay the same. – Helder Jul 09 '18 at 13:56
- It does not work anymore on TensorFlow 2.5.0 – SashimiDélicieux May 27 '21 at 12:33
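For newer TensorFlow/Keras versions where popping layers no longer works, the same idea can be expressed purely with the functional API by addressing the layer before 'predictions' by name; a minimal sketch, assuming tf.keras 2.x:

import tensorflow as tf

base = tf.keras.applications.VGG16(weights='imagenet', include_top=True)

# Take the output of 'fc2' (the layer just before 'predictions')
# and attach a new softmax head for 10 classes
x = base.get_layer('fc2').output
out = tf.keras.layers.Dense(10, activation='softmax', name='new_predictions')(x)

new_model = tf.keras.Model(base.input, out)
new_model.summary()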
We can transform the VGG model into a Sequential one as follows:
import keras
from keras.models import Sequential
from keras.layers import Dense

# Create the VGG model
vgg_model = keras.applications.vgg16.VGG16(weights='imagenet')

# The created model is of type Model
type(vgg_model)
>> keras.engine.training.Model

# Convert it to Sequential by re-adding each layer
model = Sequential()
for layer in vgg_model.layers:
    model.add(layer)

# Now check the model type: it's Sequential!
type(model)
>> keras.models.Sequential

# Verify the model details
model.summary()
>>
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_15 (InputLayer) (None, 224, 224, 3) 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, 224, 224, 64) 1792
_________________________________________________________________
block1_conv2 (Conv2D) (None, 224, 224, 64) 36928
_________________________________________________________________
block1_pool (MaxPooling2D) (None, 112, 112, 64) 0
_________________________________________________________________
block2_conv1 (Conv2D) (None, 112, 112, 128) 73856
_________________________________________________________________
block2_conv2 (Conv2D) (None, 112, 112, 128) 147584
_________________________________________________________________
block2_pool (MaxPooling2D) (None, 56, 56, 128) 0
_________________________________________________________________
block3_conv1 (Conv2D) (None, 56, 56, 256) 295168
_________________________________________________________________
block3_conv2 (Conv2D) (None, 56, 56, 256) 590080
_________________________________________________________________
block3_conv3 (Conv2D) (None, 56, 56, 256) 590080
_________________________________________________________________
block3_pool (MaxPooling2D) (None, 28, 28, 256) 0
_________________________________________________________________
block4_conv1 (Conv2D) (None, 28, 28, 512) 1180160
_________________________________________________________________
block4_conv2 (Conv2D) (None, 28, 28, 512) 2359808
_________________________________________________________________
block4_conv3 (Conv2D) (None, 28, 28, 512) 2359808
_________________________________________________________________
block4_pool (MaxPooling2D) (None, 14, 14, 512) 0
_________________________________________________________________
block5_conv1 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_conv2 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_conv3 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_pool (MaxPooling2D) (None, 7, 7, 512) 0
_________________________________________________________________
flatten (Flatten) (None, 25088) 0
_________________________________________________________________
fc1 (Dense) (None, 4096) 102764544
_________________________________________________________________
fc2 (Dense) (None, 4096) 16781312
_________________________________________________________________
predictions (Dense) (None, 1000) 4097000
=================================================================
Total params: 138,357,544
Trainable params: 138,357,544
Non-trainable params: 0
_________________________________________________________________
# Now that it is Sequential, we can perform the usual operations
model.layers.pop()

# Freeze the remaining layers
for layer in model.layers:
    layer.trainable = False

# Add a new 2-class 'softmax' head in place of the old 'predictions' layer
model.add(Dense(2, activation='softmax'))

# Check the summary: the new layer has been added
model.summary()
Layer (type) Output Shape Param #
=================================================================
input_15 (InputLayer) (None, 224, 224, 3) 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, 224, 224, 64) 1792
_________________________________________________________________
block1_conv2 (Conv2D) (None, 224, 224, 64) 36928
_________________________________________________________________
block1_pool (MaxPooling2D) (None, 112, 112, 64) 0
_________________________________________________________________
block2_conv1 (Conv2D) (None, 112, 112, 128) 73856
_________________________________________________________________
block2_conv2 (Conv2D) (None, 112, 112, 128) 147584
_________________________________________________________________
block2_pool (MaxPooling2D) (None, 56, 56, 128) 0
_________________________________________________________________
block3_conv1 (Conv2D) (None, 56, 56, 256) 295168
_________________________________________________________________
block3_conv2 (Conv2D) (None, 56, 56, 256) 590080
_________________________________________________________________
block3_conv3 (Conv2D) (None, 56, 56, 256) 590080
_________________________________________________________________
block3_pool (MaxPooling2D) (None, 28, 28, 256) 0
_________________________________________________________________
block4_conv1 (Conv2D) (None, 28, 28, 512) 1180160
_________________________________________________________________
block4_conv2 (Conv2D) (None, 28, 28, 512) 2359808
_________________________________________________________________
block4_conv3 (Conv2D) (None, 28, 28, 512) 2359808
_________________________________________________________________
block4_pool (MaxPooling2D) (None, 14, 14, 512) 0
_________________________________________________________________
block5_conv1 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_conv2 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_conv3 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_pool (MaxPooling2D) (None, 7, 7, 512) 0
_________________________________________________________________
flatten (Flatten) (None, 25088) 0
_________________________________________________________________
fc1 (Dense) (None, 4096) 102764544
_________________________________________________________________
fc2 (Dense) (None, 4096) 16781312
_________________________________________________________________
dense_4 (Dense) (None, 2) 2002
=================================================================
Total params: 134,262,546
Trainable params: 2,002
Non-trainable params: 134,260,544
_________________________________________________________________

– Milind Deore
- Quick question: doesn't `layer.trainable = False` freeze the layers? – Philipp HB Jul 05 '18 at 10:04
- @PhilippHB That's correct, `layer.trainable = False` will freeze all the layers, which is why there are only 2,002 trainable params (`Trainable params: 2,002`). I had an incorrect comment in the code snippet and have now corrected it. Thanks for pointing it out. – Milind Deore Jul 05 '18 at 10:09
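A quick way to confirm the freeze worked is to count the trainable parameters directly; a small sketch using the Keras backend (the variable name is illustrative):

import numpy as np
from keras import backend as K

# Sum the parameter counts of all trainable weights; this should print 2002,
# matching the 'Trainable params' line in the summary above
trainable_count = int(np.sum([K.count_params(w) for w in model.trainable_weights]))
print(trainable_count)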