The solution can be divided into two parts.
1. Replicate the graph of the layer
This is straightforward: just reuse the same code you used to create that layer in the first place. I suggest using Keras instead of raw TensorFlow, which gives you more flexibility and makes this step easier.
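For example, here is a minimal sketch of what the replication could look like with Keras, assuming (hypothetically) that the layer to duplicate is a Dense layer and that you are in TensorFlow 1.x graph mode, to match the session-based code further below:

import tensorflow as tf  # TensorFlow 1.x, to match the session code below

# Hypothetical example: suppose the layer to duplicate is a Dense layer.
# Replicating its graph just means building a second layer with the same
# constructor arguments (units, activation, and so on).
inputs = tf.placeholder(tf.float32, shape=[None, 128])

original_layer = tf.keras.layers.Dense(64, activation='relu', name='dense_original')
duplicate_layer = tf.keras.layers.Dense(64, activation='relu', name='dense_duplicate')

original_out = original_layer(inputs)    # creates the original layer's variables
duplicate_out = duplicate_layer(inputs)  # creates the duplicate's own variables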
2. Copy the weights
The idea is that you only need to copy the tf.Variables; each variable is basically backed by a small group of ops in the graph (the variable op itself, its initializer, and its assign/read ops). Here is a good explanation. So the code will look as follows:
vars = tf.trainable_variables()   # get the trainable variables of the original graph
vars_vals = sess.run(vars)        # get their current weights as numpy arrays
vars_duplicates = ...             # here, get the variables of your duplicated layer;
                                  # they must be in the same order as above
for var, val in zip(vars_duplicates, vars_vals):
    var.load(val, sess)           # assign the original values to the duplicates
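As a usage example, here is how both steps could fit together for the hypothetical Dense-layer sketch from part 1 (again just a sketch, assuming TensorFlow 1.x; original_layer, duplicate_layer, original_out, duplicate_out, and inputs are the names introduced above, not part of any API):

import numpy as np

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    # Variables of each layer, in matching order (kernel, then bias).
    vars_original = original_layer.trainable_weights
    vars_duplicates = duplicate_layer.trainable_weights

    vars_vals = sess.run(vars_original)      # weights as numpy arrays
    for var, val in zip(vars_duplicates, vars_vals):
        var.load(val, sess)                  # copy values into the duplicate

    # Sanity check: both layers now compute the same outputs.
    x = np.random.rand(2, 128).astype(np.float32)
    out_a, out_b = sess.run([original_out, duplicate_out],
                            feed_dict={inputs: x})
    assert np.allclose(out_a, out_b)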