
I have the code below. What I want to do is to share the same weights between two dense layers.

The equations for the op1 and op2 layers should be:

op1 = w1*y1 + w2*y2 + w3*y3 + w4*y4 + w5*y5 + b1

op2 = w1*z1 + w2*z2 + w3*z3 + w4*z4 + w5*z5 + b1

Here the weights w1 to w5 are shared between the op1 and op2 layers, whose inputs are (y1 to y5) and (z1 to z5) respectively.

from keras.layers import Input, Dense, concatenate
from keras.models import Model

ip_shape1 = Input(shape=(5,))
ip_shape2 = Input(shape=(5,))

# Two separate Dense layers: each creates its own weights, so nothing is shared here
op1 = Dense(1, activation="sigmoid", kernel_initializer="ones")(ip_shape1)
op2 = Dense(1, activation="sigmoid", kernel_initializer="ones")(ip_shape2)

merge_layer = concatenate([op1, op2])
predictions = Dense(1, activation='sigmoid')(merge_layer)

model = Model(inputs=[ip_shape1, ip_shape2], outputs=predictions)

Thanks in advance.

Mahek Shah

1 Answer


This uses the same layer instance for both sides, so the weights and bias are shared:

from keras.layers import Input, Dense, Concatenate
from keras.models import Model

ip_shape1 = Input(shape=(5,))
ip_shape2 = Input(shape=(5,))

# Create the Dense layer once and reuse it, so both branches share its weights and bias
dense = Dense(1, activation="sigmoid", kernel_initializer="ones")

op1 = dense(ip_shape1)
op2 = dense(ip_shape2)

merge_layer = Concatenate()([op1, op2])
predictions = Dense(1, activation='sigmoid')(merge_layer)

model = Model(inputs=[ip_shape1, ip_shape2], outputs=predictions)
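As a quick sanity check (a sketch, not part of the original answer), you can confirm the sharing by inspecting the weights: the shared `dense` layer contributes only one kernel and one bias to the model, no matter how many branches call it. This assumes the `model` and `dense` objects built above.

# The shared layer holds a single kernel and bias used by both branches
shared_kernel, shared_bias = dense.get_weights()
print(shared_kernel.shape, shared_bias.shape)   # (5, 1) and (1,)

# Trainable tensors: shared dense (kernel + bias) + final dense (kernel + bias) = 4
print(len(model.trainable_weights))             # 4, not 6, because op1 and op2 reuse the same layer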
Daniel Möller
  • it's also described here: https://keras.io/getting-started/functional-api-guide/#shared-layers – Ufos Oct 01 '18 at 14:30
  • What if you want the weights to be trainable in one network but not in the other, but you want the 2nd one to be updated whenever the 1st one is trained? – hosford42 Nov 10 '19 at 19:00
  • Add a `Lambda(lambda x: K.stop_gradient(x))` at the end of side 2. – Daniel Möller Nov 10 '19 at 20:36
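To illustrate that last comment, here is a hedged sketch (not from the original thread) of how the stop-gradient trick could look: gradients only flow back through side 1, but side 2 still reads the same shared weights, so it follows every update made through side 1. It assumes the standalone Keras API with `Lambda` and `K.stop_gradient`.

from keras.layers import Input, Dense, Concatenate, Lambda
from keras.models import Model
import keras.backend as K

ip_shape1 = Input(shape=(5,))
ip_shape2 = Input(shape=(5,))

dense = Dense(1, activation="sigmoid", kernel_initializer="ones")

op1 = dense(ip_shape1)
# Block gradients flowing back through side 2; its output still uses the shared
# weights, so it automatically reflects whatever side 1 learns.
op2 = Lambda(lambda x: K.stop_gradient(x))(dense(ip_shape2))

merge_layer = Concatenate()([op1, op2])
predictions = Dense(1, activation='sigmoid')(merge_layer)

model = Model(inputs=[ip_shape1, ip_shape2], outputs=predictions)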