I have a piece of code (not mine) that defines a non-trainable variable, which is then used to set another property of the layer. It looks something like this:
import tensorflow as tf

# my_layer and some_function are defined elsewhere.
initial_weight_val = 1.0
w = my_layer.add_weight(name=my_layer.name + '/my_weight', shape=(),
                        initializer=tf.initializers.constant(initial_weight_val),
                        trainable=False)
# Use w to set another parameter of the layer.
my_layer.the_parameter = some_function(w)
Please do not tell me what a non-trainable variable is (of course, I know what it is), which is also discussed in What is the definition of a non-trainable parameter?
However, given that w will (I think) never be changed, I don't get why someone would define such a variable rather than simply using the Python variable initial_weight_val directly, especially in TensorFlow 2.0 (which is my case and the only case I am interested in). One possibility is that the variable could later need to become trainable, but why should one anticipate this anyway?

Can I safely use initial_weight_val to define the_parameter, i.e. pass initial_weight_val to some_function rather than w?
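For concreteness, here is the alternative I have in mind (a minimal sketch; my_layer, some_function, and the_parameter are just the placeholders from the snippet above):

initial_weight_val = 1.0

# Proposed alternative: skip add_weight entirely and pass the plain
# Python float to some_function.
my_layer.the_parameter = some_function(initial_weight_val)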
I am concerned with this issue because I cannot save a model that contains such a variable: I get the error "variable is not JSON serializable" (Keras and TF are so buggy, btw!). So I was trying to understand the equivalence between user-defined non-trainable variables and plain Python variables.
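For what it's worth, here is my understanding of where the error comes from (a minimal sketch, assuming the variable ends up in a config dictionary that gets JSON-encoded while saving):

import json

import tensorflow as tf

# A plain Python float serializes fine...
print(json.dumps({"the_parameter": 1.0}))  # {"the_parameter": 1.0}

# ...but a tf.Variable does not: json.dumps raises a TypeError
# saying the Variable is not JSON serializable.
v = tf.Variable(1.0, trainable=False)
try:
    json.dumps({"the_parameter": v})
except TypeError as e:
    print(e)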