4

I am trying to re-implement Multi-View CNN (MVCNN) in TensorFlow 2.0. However, from what I can see, Keras layers do not have a reuse=True|False option like the layers in tf.layers. Is there any way I can define layers that share parameters using the new API? Or do I need to build my model in the TFv1 fashion?

Thank you very much!

Tai Christian
  • 654
  • 1
  • 10
  • 21

1 Answer

4

To share the parameters of a model, you just have to use the same model. This is the new paradigm introduced in TensorFlow 2.0: in TF 1.x we used a graph-oriented approach, where we needed to re-use the same graph to share the variables, but now we can just re-use the same tf.keras.Model object with different inputs.

It is the model object that carries its own variables.

Using a Keras model and tf.GradientTape, you can easily train a model that shares variables, as shown in the example below.


# This is your model definition
model = tf.keras.Sequential(...)

# input_1 and input_2 are two different inputs to the same model

with tf.GradientTape() as tape:
  a = model(input_1)
  b = model(input_2)
  # you can then compute the loss from both outputs
  loss = a + b

# Use the tape to compute the gradients of the loss
# w.r.t. the model trainable variables
grads = tape.gradient(loss, model.trainable_variables)

# opt is an optimizer object, like tf.optimizers.Adam;
# use it to apply the update rule, following the gradient direction
opt.apply_gradients(zip(grads, model.trainable_variables))
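To make this concrete, here is a runnable sketch that fills in the elided pieces with a hypothetical tiny Dense network and random inputs (the architecture and sizes are illustrative only, not the actual MVCNN). Both calls go through the same layers, so the model still owns a single set of variables and one gradient step updates the shared weights once:

```python
import numpy as np
import tensorflow as tf

# Hypothetical tiny model standing in for the real network
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="relu", input_shape=(3,)),
    tf.keras.layers.Dense(1),
])

# Two different (random) inputs fed to the SAME model
input_1 = tf.constant(np.random.rand(2, 3), dtype=tf.float32)
input_2 = tf.constant(np.random.rand(2, 3), dtype=tf.float32)

opt = tf.optimizers.Adam(learning_rate=1e-3)

with tf.GradientTape() as tape:
    # Both forward passes use the same layers: the variables are shared
    a = model(input_1)
    b = model(input_2)
    loss = tf.reduce_mean(a + b)

grads = tape.gradient(loss, model.trainable_variables)
opt.apply_gradients(zip(grads, model.trainable_variables))

# The model still owns a single set of variables
# (one kernel + one bias per Dense layer)
print(len(model.trainable_variables))  # 4
```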

nessuno
  • 26,493
  • 5
  • 83
  • 74
  • Thank you @nessuno! So, for example, in the case of multi-view model, I would like to reuse the first half of the model for extracting features from multiple inputs and the second half for combining their concatenated features. Do I need to break the model into 2? – Tai Christian Jul 05 '19 at 10:24
  • You have to define a Keras model with 2 outputs :) have a look at the functional keras API https://www.tensorflow.org/beta/guide/keras/functional – nessuno Jul 05 '19 at 10:54
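Following that suggestion, here is a minimal functional-API sketch (layer names and sizes are illustrative assumptions, not the actual MVCNN architecture) in which the first half of the network is shared across two views and the second half combines their concatenated features:

```python
import numpy as np
import tensorflow as tf

# Hypothetical shared feature extractor (the "first half" of the network)
extractor = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(5,)),
])

# Each view enters through its own Input node...
view_1 = tf.keras.Input(shape=(5,))
view_2 = tf.keras.Input(shape=(5,))

# ...but passes through the SAME extractor, so its weights are shared
feat_1 = extractor(view_1)
feat_2 = extractor(view_2)

# "Second half": combine the concatenated features
combined = tf.keras.layers.Concatenate()([feat_1, feat_2])
output = tf.keras.layers.Dense(1)(combined)

model = tf.keras.Model(inputs=[view_1, view_2], outputs=output)

# One forward pass over a batch of 2 samples per view
out = model([np.random.rand(2, 5).astype("float32"),
             np.random.rand(2, 5).astype("float32")])
print(out.shape)  # (2, 1)
```

There is no need to break the model into two separate objects: the shared extractor contributes a single set of weights, and the whole thing trains end-to-end as one tf.keras.Model.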