I'm doing some experimentation with TensorFlow, and have run into a snag. I'm trying to use TF to evaluate a change in a model, then either retain or revert the model, based on the resultant change in the loss function. I've got the hard part (conditional control) figured out, but I'm stuck on something that should be fairly straightforward: I can't seem to store the values of tf.trainable_variables for an iteration, then restore them if needed.
Let's say I build an Op:
    ...
    store_trainable_vars = []
    for v in tf.trainable_variables():
        store_trainable_vars.append(v)
    ...
Then later, I want to restore tf.trainable_variables to the value it had when this Op was last run. I'd want to do something like:
    def reject_move():
        revert_state = []
        for (v, s) in zip(tf.trainable_variables(), store_trainable_vars):
            revert_state.append(tf.assign(v, s, name="revert_state"))
        return revert_state
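A plain-numpy analogy (not TensorFlow, just to illustrate the failure mode I'm describing): because store_trainable_vars holds references to the live variables rather than frozen values, the "revert" just copies each variable's current value back into itself:

```python
import numpy as np

# Hypothetical stand-in for a single trainable variable.
v = np.array([1.0, 2.0])

store_trainable_vars = []
store_trainable_vars.append(v)   # stores a reference, not a snapshot

v[:] = [9.0, 9.0]                # a "training step" mutates v in place

v[:] = store_trainable_vars[0]   # the "revert" is a no-op: v := v
print(v)                         # v still holds the post-step values
```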
Obviously, this will re-evaluate store_trainable_vars, which in turn links to the present value of tf.trainable_variables(), obviating the revert_state Op. I need some way to store and retrieve the value of Tensors without calling back to the present value of those Tensors. Something like
    ...
    store_trainable_vars = []
    for v in tf.trainable_variables():
        store_trainable_vars.append(v.value_right_now())
    ...
where v.value_right_now() returns a constant that won't change until overwritten.
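In plain Python/numpy terms (a hypothetical analogy, since v.value_right_now() doesn't exist), the semantics I'm after are those of an explicit copy taken at snapshot time:

```python
import numpy as np

v = np.array([1.0, 2.0])

snapshot = v.copy()   # analogue of the hypothetical v.value_right_now():
                      # a frozen copy, decoupled from the live variable

v[:] = [9.0, 9.0]     # later updates to v...

print(snapshot)       # ...leave the snapshot untouched
```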
I know I could use the Saver, but that solution writes to disk, which is not acceptable for this application, since it will run inside a training loop.
I'm probably missing something obvious - any guidance would be appreciated.