
I am working with a CNN and my professor wants me to try to include some information that is relevant but isn't available in the images themselves. As of right now, that data is a 1-D array. He thinks that adding it after the flattening layer and before the dense layers should be possible, but neither of us is quite knowledgeable enough to do it yet.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Conv2D, BatchNormalization, LeakyReLU,
                                     Dropout, Flatten, Dense)

model = Sequential()
for i, feat in enumerate(args.conv_f):
    if i == 0:
        model.add(Conv2D(feat, input_shape=x[0].shape, kernel_size=3, padding='same', use_bias=False))
    else:
        model.add(Conv2D(feat, kernel_size=3, padding='same', use_bias=False))
    model.add(BatchNormalization())
    model.add(LeakyReLU(alpha=args.conv_act))
    model.add(Conv2D(feat, kernel_size=3, padding='same', use_bias=False))
    model.add(BatchNormalization())
    model.add(LeakyReLU(alpha=args.conv_act))
    model.add(Dropout(args.conv_do[i]))

model.add(Flatten())

#Input code here

denseArgs = {'use_bias': False}
for i, feat in enumerate(args.dense_f):
    model.add(Dense(feat, **denseArgs))
    model.add(BatchNormalization())
    model.add(LeakyReLU(alpha=args.dense_act))
    model.add(Dropout(args.dense_do[i]))
model.add(Dense(1))

We could be wrong, obviously, so any help is appreciated! Thanks!

brandon

1 Answer


One approach I know of requires the Keras functional API. This means you would have to drop the Sequential approach you are currently using.

Using a toy model as an example, let the following block:

from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Flatten

img_input = Input((64, 64, 1))
model = Conv2D(20, (5, 5))(img_input)
model = MaxPooling2D((2, 2))(model)

model = Flatten()(model)

be the convolutional layers of a CNN, ending with a Flatten. You can add extra information by concatenating the output of that Flatten layer with the new information. The new information is packaged as a second input branch (here af_input) consisting of just an Input layer.

As an example:

from tensorflow.keras.layers import Input, Dense, Dropout, Concatenate
from tensorflow.keras.models import Model

# Second input: a vector of 2 extra features per sample
af_input = Input(shape=(2,))

# Concatenate the flattened CNN features with the extra features
model = Concatenate()([model, af_input])

model = Dense(120, activation='relu')(model)
model = Dropout(0.1)(model)
model = Dense(100, activation='relu')(model)

predictions = Dense(2)(model)

# The full model takes both the image and the extra-feature vector as inputs
fullmodel = Model(inputs=[img_input, af_input], outputs=predictions)

So now the output of the CNN's Flatten layer will be concatenated with a vector of extra information (here, 2 features).

You can then keep adding layers to the network as usual.
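If it helps, here is a minimal sketch of how such a two-input model might be compiled and trained; the array names (images, extra_feats, labels) and the random data are placeholders, not anything from your setup.

import numpy as np

# Placeholder data: 100 samples of 64x64 grayscale images,
# 2 extra features per sample, and a 2-value target (to match Dense(2) above).
images = np.random.rand(100, 64, 64, 1)
extra_feats = np.random.rand(100, 2)
labels = np.random.rand(100, 2)

fullmodel.compile(optimizer='adam', loss='mse')

# Multi-input models take a list of arrays, in the same order
# as the `inputs` argument given to Model().
fullmodel.fit([images, extra_feats], labels, epochs=5, batch_size=16)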

For another example and a good explanation, I suggest you check this Stack Overflow question: How to concatenate two layers in keras?
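Applied to the model in your question, the same idea might look roughly like the sketch below. This is only an outline under a few assumptions: args and x have the same meaning as in your question, the repeated conv/BatchNorm/LeakyReLU block is simplified to one Conv2D per loop iteration, and n_extra (the length of your 1-D extra-information array) is a hypothetical name.

from tensorflow.keras.layers import (Input, Conv2D, BatchNormalization, LeakyReLU,
                                     Dropout, Flatten, Dense, Concatenate)
from tensorflow.keras.models import Model

img_input = Input(shape=x[0].shape)      # image branch
extra_input = Input(shape=(n_extra,))    # 1-D extra-information branch (n_extra is hypothetical)

t = img_input
for i, feat in enumerate(args.conv_f):
    t = Conv2D(feat, kernel_size=3, padding='same', use_bias=False)(t)
    t = BatchNormalization()(t)
    t = LeakyReLU(alpha=args.conv_act)(t)
    t = Dropout(args.conv_do[i])(t)

t = Flatten()(t)
t = Concatenate()([t, extra_input])      # inject the extra features right after flattening

for i, feat in enumerate(args.dense_f):
    t = Dense(feat, use_bias=False)(t)
    t = BatchNormalization()(t)
    t = LeakyReLU(alpha=args.dense_act)(t)
    t = Dropout(args.dense_do[i])(t)

out = Dense(1)(t)
model = Model(inputs=[img_input, extra_input], outputs=out)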

RCoray