
I would like to modify or append a Keras input layer to a pretrained Inception model. I know there is a way to pop and append downstream layers. What about upstream layers?

For example, I would like to add a layer that takes my input image and branches it into 3 channels (I know there are other solutions, but let's try this):

from keras.applications.inception_v3 import InceptionV3
from keras.models import Model
from keras.layers import Dense, Input

base_model = InceptionV3(weights='imagenet', include_top=False)

img = Input((None, None, 1))
d0 = Dense(3, kernel_initializer='Ones', use_bias=False)
img3 = d0(img)

It turns out I cannot simply set the input attribute like base_model.input = img3 -- it raises an exception.


Update:

I actually need to modify both upstream and downstream layers. Currently I am cropping downstream layers in my network in the following way:

import keras
from keras.applications.inception_v3 import InceptionV3
from keras.models import Model
from keras.layers import Input, Conv2D, Dense, Dropout, GlobalAveragePooling2D

n_classes = 1
final_activation = 'sigmoid'
ndense = 64
dropout = 0.5
base_trainable = False

base_model = InceptionV3(weights='imagenet', include_top=False)
img = Input((None, None, 1))
d0 = Conv2D(3, (1, 1), kernel_initializer='Ones', use_bias=False)
img3 = d0(img)
base_model(img3)

# get the third Concatenate layer and crop the network at it:
cc = 0
for nn, la in enumerate(base_model.layers):
    if type(la) is keras.layers.Concatenate:
        if cc == 3:
            x = la.output
            break
        cc += 1
base_model.layers = base_model.layers[:nn+1]

#x = [la.output for la in base_model.layers if type(la) is keras.layers.Concatenate][3]
x = GlobalAveragePooling2D()(x)
# let's add a fully-connected layer
x = Dropout(dropout)(x)

x = Dense(ndense, activation='relu')(x)
# and a logistic output layer with n_classes outputs
predictions = Dense(n_classes, activation=final_activation)(x)

# this is the model we will train
model = Model(inputs=img, outputs=predictions)

How do I add the above mentioned modification to my code?

Dima Lituiev

1 Answer


For complex models, and especially for models with branches, the idea of popping/adding layers doesn't apply, because it only works for sequential models.

The good news is that using the Functional API Model you can do almost anything.

You're almost there with your model, but you can't set "model.input". You can, however, pass the input to the model as if the model were a layer:

output = base_model(img3) #if an exception appears here, it's probably a shape problem
myModel = Model(img,output)

Now, there is a problem with your input. You cannot use "None". None is reserved by Keras for the variable batch dimension. You must define the exact size of your image, which should be:

img = Input((horizPixels,vertPixels,channels))  

Please notice that your img3 tensor must have a shape compatible with the input Inception expects. If Inception expects three-channel images, make sure your img3 is shaped like (x, y, channels) or (None, x, y, channels).
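
As a minimal sketch of that wiring (assuming a fixed 299x299 single-channel input and a 1x1 convolution to produce the 3 channels; the exact size and layer choice are just examples):

from keras.applications.inception_v3 import InceptionV3
from keras.models import Model
from keras.layers import Input, Conv2D

base_model = InceptionV3(weights='imagenet', include_top=False)

img = Input((299, 299, 1))                     # fixed-size, single-channel input (example size)
img3 = Conv2D(3, (1, 1), use_bias=False)(img)  # expand 1 channel to the 3 channels Inception expects
features = base_model(img3)                    # use the pretrained model as if it were a layer

full_model = Model(inputs=img, outputs=features)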


I'm not sure I understood what you want to achieve, but if your input image has only one channel and you want to create a 3-channel image from it, I suggest you use a convolutional layer with 3 filters:

d0 = Conv2D(filters=3, kernel_size=(3,3), .....)

But if you want a layer that doesn't learn anything and already knows what to do, maybe it's better to just prepare the 3-channel images yourself in your training data instead of leaving the task to the model.
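
For instance, a quick preprocessing sketch (x_gray here is a hypothetical array holding your grayscale images) that simply replicates the single channel three times:

import numpy as np

# x_gray: grayscale images shaped (n_samples, height, width, 1)
x_rgb = np.repeat(x_gray, 3, axis=-1)  # -> (n_samples, height, width, 3)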

Daniel Möller
  • Thank you. So how do I add it to the code for cropping/replacing downstream layers? (see update) – Dima Lituiev Jun 30 '17 at 19:04
  • I think you can just remove this line and it seems ok: `base_model.layers = base_model.layers[:nn+1]`. The rest of base_model will still be there, but not in **your** model, only in the old model. It will simply sit there doing nothing, because the "path" you defined doesn't flow through the remaining layers. – Daniel Möller Jun 30 '17 at 19:13
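
A minimal sketch of one way to combine both changes, assuming a fixed input size and wrapping the pretrained graph up to the chosen Concatenate layer in its own functional Model, which is then called on the new 3-channel tensor (the layer index and sizes are just example values):

from keras.applications.inception_v3 import InceptionV3
from keras.models import Model
from keras.layers import (Input, Conv2D, Dense, Dropout,
                          GlobalAveragePooling2D, Concatenate)

base_model = InceptionV3(weights='imagenet', include_top=False)

# find the output tensor of the chosen Concatenate layer
cc = 0
for la in base_model.layers:
    if isinstance(la, Concatenate):
        if cc == 3:
            cropped_output = la.output
            break
        cc += 1

# wrap everything up to that layer in its own functional model
cropped_base = Model(inputs=base_model.input, outputs=cropped_output)

# new single-channel input, expanded to 3 channels with a 1x1 convolution
img = Input((299, 299, 1))  # example fixed size
img3 = Conv2D(3, (1, 1), kernel_initializer='Ones', use_bias=False)(img)

x = cropped_base(img3)      # call the cropped pretrained model as a layer
x = GlobalAveragePooling2D()(x)
x = Dropout(0.5)(x)
x = Dense(64, activation='relu')(x)
predictions = Dense(1, activation='sigmoid')(x)

model = Model(inputs=img, outputs=predictions)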