The question is whether just changing the learning_rate argument of tf.train.AdamOptimizer actually results in any change in behaviour.
Let's say the code looks like this:
myLearnRate = 0.001
...
output = tf.someDataFlowGraph
trainLoss = tf.losses.someLoss(output)
trainStep = tf.train.AdamOptimizer(learning_rate=myLearnRate).minimize(trainLoss)

with tf.Session() as session:
    # first train step
    session.run(trainStep, feed_dict={input: someData, target: someTarget})
    myLearnRate = myLearnRate * 0.1
    # second train step
    session.run(trainStep, feed_dict={input: someData, target: someTarget})
Would the decreased myLearnRate now be applied in the second trainStep? That is, is the creation of the node trainStep only evaluated once:
trainStep = tf.train.AdamOptimizer(learning_rate=myLearnRate).minimize(trainLoss)
Or is it evaluated with every session.run(trainStep)? And how could I have checked whether my AdamOptimizer in TensorFlow actually changed the learning rate?
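One experiment I imagine running to check this (a minimal sketch, assuming TF 1.x; the variable w, the quadratic loss and the concrete learning rates are made up for illustration): shrink the Python float between two training steps and compare the size of the two updates. If the rebound value were picked up, the second update should come out roughly ten times smaller.

import tensorflow as tf

# Toy graph for the check: one variable, quadratic loss, Adam built from a plain Python float.
myLearnRate = 0.1
w = tf.Variable(1.0)
trainLoss = tf.square(w)
trainStep = tf.train.AdamOptimizer(learning_rate=myLearnRate).minimize(trainLoss)

with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    w0 = session.run(w)
    session.run(trainStep)              # first train step
    w1 = session.run(w)
    myLearnRate = myLearnRate * 0.1     # only rebinds the Python name
    session.run(trainStep)              # second train step
    w2 = session.run(w)
    # For this loss, Adam's per-step update is roughly the learning rate, so if the
    # rebinding took effect, |w2 - w1| should be about ten times smaller than |w1 - w0|.
    print(abs(w1 - w0), abs(w2 - w1))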
Disclaimer 1: I'm aware that manually changing the learning rate is bad practice.
Disclaimer 2: I'm aware there is a similar question, but it was solved by feeding a tensor as the learning rate, which is updated on every trainStep (here). That makes me lean towards assuming it would only work with a tensor as input for the learning_rate of AdamOptimizer, but I'm neither sure of that nor do I understand the reasoning behind it.