Using TensorFlow in Python, I would like to have a "parent" tensor T3, which consists of some combination of (and operation on) two "child" tensors, T1 and T2 (in fact, T1 and T2 are also combinations of / operations on other tensors).
I would like to run an optimizer on a loss function of the "parent" tensor T3, while holding one of the "children", say, T1 (and all of its possible "grandchildren" tensors), constant.
It is worth noting that T3 is returned from a function, and T1 and T2 are created within that function; the optimizer is run on the function's return value, T3. Therefore, one of my concerns is that T1 and T2 might go out of scope, and their names may no longer be available to the main script. Below is a very abbreviated form of how I initialize the tensors:
    def make_T3():
        T1 = foo_1(name='T1')
        T2 = foo_2(name='T2')
        T3 = foo_3(T1, T2, name='T3')
        return T3
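For concreteness, here is a minimal, self-contained sketch of the kind of setup I mean, with plain tf.Variables standing in for whatever foo_1/foo_2/foo_3 actually build (the variable names and values are my own illustration). It shows one approach I have considered, tf.stop_gradient, which blocks the gradient from flowing into T1's subtree, though I am not sure it is the right tool here:

```python
import tensorflow as tf

# Hypothetical stand-ins for the real constructors: v1 plays the role of a
# "grandchild" variable inside T1, v2 of one inside T2.
v1 = tf.Variable(2.0, name='T1_var')
v2 = tf.Variable(3.0, name='T2_var')

def make_T3():
    T1 = tf.stop_gradient(v1 * v1)  # freeze T1 and everything behind it
    T2 = v2 * v2
    T3 = T1 + T2
    return T3

with tf.GradientTape() as tape:
    loss = make_T3()

g1, g2 = tape.gradient(loss, [v1, v2])
# g1 is None (no gradient reaches the frozen subtree); g2 == 2 * v2 == 6.0
```

With this approach the freezing happens where T1 is built, so the "grandchildren" never receive gradients regardless of what var_list contains.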
This question and this question are very similar; however, because of the way my tensors are declared, I am not sure the same approach works, for two reasons: (1) I do not know whether running the optimizer on all variables except the one named "T1" (so that var_list still includes the "grandchildren" tensors contained within T1) would optimize those "grandchildren" or not. (2) I am not sure whether variable scope is an issue.
So, how would I properly optimize a loss function of T3 while holding its child, T1, constant?
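To make points (1) and (2) concrete, here is a sketch of the var_list approach I have in mind, excluding T1's variables by a name prefix. The names like 'T1/w' and the trivial variables are my own illustration, not my real code, and I collect the variables in a list at creation time so that nothing is lost when they go out of scope inside the function:

```python
import tensorflow as tf

all_vars = []  # recorded at creation time, so Python scoping is not an issue

def make_var(value, name):
    v = tf.Variable(value, name=name)
    all_vars.append(v)
    return v

def make_T3():
    w1 = make_var(2.0, 'T1/w')  # "grandchild" inside T1 (illustrative name)
    w2 = make_var(3.0, 'T2/w')  # "grandchild" inside T2
    T1 = w1 * w1
    T2 = w2 * w2
    return T1 + T2              # T3

opt = tf.keras.optimizers.SGD(learning_rate=0.1)

with tf.GradientTape() as tape:
    loss = make_T3()

# var_list: everything except the T1 subtree, selected by name prefix
train_vars = [v for v in all_vars if not v.name.startswith('T1/')]
grads = tape.gradient(loss, train_vars)
opt.apply_gradients(zip(grads, train_vars))
# w1 stays at 2.0; w2 is updated to 3.0 - 0.1 * (2 * 3.0) = 2.4
```

If this is the right idea, then the "grandchildren" of T1 are held constant simply by being absent from var_list, and scope is not a problem as long as the variables are recorded (or retrievable by name) when they are created.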