You were actually headed in the right direction with the snippet you provided :)
Step 1: get the names of previously trainable variables
The trickiest part is to get the names of the previously trainable variables. Hopefully the model was created with some high-level framework like keras or tf.slim – they wrap their variables nicely in something like conv2d_1/kernel, dense_1/bias, batch_normalization/gamma, etc.
If you're not sure, the most useful thing to do is to visualize the graph...
import tensorflow as tf

# read the frozen graph definition from disk
with tf.gfile.GFile('frozen.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# now build the graph in memory and dump it for visualization
with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="prefix")
    writer = tf.summary.FileWriter('out', graph)
    writer.close()
... with tensorboard:
$ tensorboard --logdir out/
and see for yourself what the graph looks like and what the naming is.
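If you'd rather skip tensorboard, you can also dump candidate names straight from the graph_def loaded above (a quick sketch; filtering on the Const op type is just a heuristic, since frozen weights typically end up as Const nodes):

# frozen weights usually show up as Const ops, so their names are good candidates
for node in graph_def.node:
    if node.op == 'Const':
        print(node.name)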
Step 2: replace constants with variables (the fun part :D)
All you need is the magical library called tf.contrib.graph_editor. Now let's say you've stored the names of the previously trainable ops (the ones that used to be variables but are now Const) in probable_variables (as in your Edit 2).
Note: remember the difference between ops, tensors, and variables. Ops are elements of the graph, a tensor is a buffer that holds the results of an op, and a variable is a wrapper around a tensor, with 3 ops: assign (to be called when you initialize the variable), read (called by other ops, e.g. conv2d), and a ref tensor (which holds the values).
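You can inspect these pieces yourself (a tiny sketch; the variable name foo is arbitrary):

import tensorflow as tf

with tf.Graph().as_default():
    v = tf.get_variable('foo', shape=[2], dtype='float32')
    print(v.op.name)           # 'foo' - the variable op itself
    print(v.value().name)      # 'foo/read:0' - the tensor other ops consume
    print(v.initializer.name)  # 'foo/Assign' - run when initializing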
Note 2: graph_editor can only be run outside a session – you cannot make any graph modifications online!
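One more thing: the code below uses a load_graph helper like the one from your question; if you don't have it handy, a minimal version could look like this (note the empty name here, so the node names stay unprefixed and match probable_variables):

import tensorflow as tf

def load_graph(frozen_graph_filename):
    # read the serialized GraphDef and import it into a fresh graph;
    # name='' keeps the original node names (no 'prefix/' this time)
    with tf.gfile.GFile(frozen_graph_filename, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name='')
    return graph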
import tensorflow as tf
import tensorflow.contrib.graph_editor as ge

# load the graph into memory, just as in Step 1
graph = load_graph('frozen.pb')

# create a variable for each constant, beware the naming
const_var_pairs = []
with graph.as_default():
    for name in probable_variables:
        var_shape = graph.get_tensor_by_name('{}:0'.format(name)).get_shape()
        var_name = '{}_a'.format(name)
        var = tf.get_variable(name=var_name, shape=var_shape, dtype='float32')
        const_var_pairs.append((name, var))

    # the graph editor wants tf.Operation objects, so look them up in the graph
    name_to_op = {op.name: op for op in graph.get_operations()}

    # magic: now we reroute the consumers of each const to the variable's read op
    for const_name, var in const_var_pairs:
        const_op = name_to_op[const_name]
        var_reader_op = name_to_op[var.op.name + '/read']
        ge.swap_outputs(ge.sgv(const_op), ge.sgv(var_reader_op))

# Now we can safely create a session and copy the values
sess = tf.Session(graph=graph)
for const_name, var in const_var_pairs:
    # the const op still holds the frozen weights; load them into the variable
    ts = graph.get_tensor_by_name('{}:0'.format(const_name))
    var.load(ts.eval(session=sess), sess)

# All done! Now you can make sure everything is correct by visualizing
# the graph again and by calculating outputs for some inputs.
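For example, something like this (a sketch – input:0 and output:0 are placeholders for your graph's real endpoints, and the dummy input shape is made up):

import numpy as np

x_t = graph.get_tensor_by_name('input:0')   # hypothetical input tensor
y_t = graph.get_tensor_by_name('output:0')  # hypothetical output tensor

# the loaded values live in `sess`, so reuse it instead of opening a new session
x = np.random.rand(1, 224, 224, 3).astype(np.float32)  # dummy input
print(sess.run(y_t, feed_dict={x_t: x}))

# and check that the graph now exposes trainable variables
with graph.as_default():
    print(tf.trainable_variables())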
PS: this code was not tested; however, I've been using graph_editor and performing network surgery quite often lately, so I think it should mostly be correct :)