Edited after question's edit
Will tf.train.get_global_step() increment global_step at every training step?
No. The optimizer takes care of the increment; tf.train.get_global_step() only returns the variable currently defined to store the global step (if one was already defined).
and affect the learning_rate accordingly?
Yes, the learning rate schedule will internally get the value of the current global step and adjust the LR accordingly.
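To make both answers concrete, here is a minimal TF 1.x sketch (the toy loss and the hyperparameters are purely illustrative): the schedule is built from the global step tensor, and passing global_step to minimize() is what makes the optimizer increment it.

import tensorflow as tf  # TF 1.x

# Toy model: fit a single weight towards a target value.
w = tf.Variable(5.0)
loss = tf.square(w - 3.0)

# Returns the existing global step variable, or creates it if none is defined.
global_step = tf.train.get_or_create_global_step()

# The schedule reads global_step internally to compute the current LR.
learning_rate = tf.train.exponential_decay(
    learning_rate=0.1, global_step=global_step,
    decay_steps=100, decay_rate=0.96)

# Passing global_step here is what makes the optimizer increment it
# once every time train_op is run.
train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(
    loss, global_step=global_step)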
Update: some clarification
In TF there's a key difference between "variables" as commonly intended in Python (not tf.Variable()) and tensors (a tf.Variable is a tensor).
When you call
global_step = tf.train.get_global_step()
(assuming a global step was previously defined somewhere) you get back a Tensor object, not an integer.
The underlying idea is to separate the construction phase of the computation, where you describe the operations applied to the data, from the actual execution, where you feed the data and get results. This often causes confusion at first, but it's a key point of TF's programming model (at least until TF 2.0).
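For instance (assuming a global step exists in the graph), printing it at construction time only shows a description of the tensor, not a number:

print(global_step)
# e.g. <tf.Variable 'global_step:0' shape=() dtype=int64_ref>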
If you want to get the current value of global_step, you need to evaluate the graph. Assuming you already have a tf.Session() defined, you can either run:
step_value = sess.run(global_step)
or alternatively:
step_value = global_step.eval(session=sess)
This is exactly what the LR schedule does internally: at each step it gets the current value of the global step and computes the LR from it with the given parameters. Similarly, the optimizer takes care of updating the global step value at each training step, so unless you need the value for logging/debugging, you normally wouldn't evaluate global_step yourself.
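If you do want to watch the bookkeeping happen, here is an end-to-end sketch continuing the assumed setup from the first snippet above (step count and print interval are arbitrary):

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(300):
        sess.run(train_op)  # the optimizer increments global_step here
        # Evaluate in a separate run so the two values are consistent with each other.
        step_value, lr_value = sess.run([global_step, learning_rate])
        if step_value % 100 == 0:
            print("step", step_value, "lr", lr_value)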