9

The question is whether just changing the learning_rate argument in tf.train.AdamOptimizer actually results in any change in behaviour. Let's say the code looks like this:

myLearnRate = 0.001
...
output = tf.someDataFlowGraph
trainLoss = tf.losses.someLoss(output)
trainStep = tf.train.AdamOptimizer(learning_rate=myLearnRate).minimize(trainLoss)
with tf.Session() as session:
    #first trainstep
    session.run(trainStep, feed_dict = {input:someData, target:someTarget})
    myLearnRate = myLearnRate * 0.1
    #second trainstep
    session.run(trainStep, feed_dict = {input:someData, target:someTarget})

Would the decreased myLearnRate now be applied in the second trainStep? That is, is the node trainStep only evaluated once, when it is created:

trainStep = tf.train.AdamOptimizer(learning_rate=myLearnRate).minimize(trainLoss)

Or is it evaluated with every session.run(trainStep)? And how could I have checked whether my AdamOptimizer in TensorFlow actually changed the learning rate?

Disclaimer 1: I'm aware that manually changing the learning rate is bad practice. Disclaimer 2: I'm aware there is a similar question, but it was solved by passing in a tensor as the learning rate, which is updated on every trainStep (here). That makes me lean towards assuming it would only work with a tensor as input for the learning_rate in AdamOptimizer, but I'm neither sure of that nor do I understand the reasoning behind it.


2 Answers

10

The short answer is no, your new learning rate is not applied. TF builds the graph when you first run it, and changing something on the Python side afterwards will not translate to a change in the graph at run time. You can, however, feed a new learning rate into your graph pretty easily:

# Use a placeholder in the graph for your user-defined learning rate instead
learning_rate = tf.placeholder(tf.float32)
# ...
trainStep = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(trainLoss)
applied_rate = 0.001  # we will update this every training step
with tf.Session() as session:
    #first trainstep, feeding our applied rate to the graph
    session.run(trainStep, feed_dict = {input: someData,
                                        target: someTarget,
                                        learning_rate: applied_rate})
    applied_rate *= 0.1  # update the rate we feed to the graph
    #second trainstep
    session.run(trainStep, feed_dict = {input: someData,
                                        target: someTarget,
                                        learning_rate: applied_rate})
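
If you don't want to feed the rate on every single step, a small variation (just a sketch built on the question's pseudocode, not something this answer covers) is tf.placeholder_with_default, which falls back to a default value whenever learning_rate is not fed:

# Sketch: same idea, but with a default so feeding the rate is optional
learning_rate = tf.placeholder_with_default(tf.constant(0.001), shape=[])
trainStep = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(trainLoss)
with tf.Session() as session:
    # uses the default rate of 0.001
    session.run(trainStep, feed_dict = {input: someData, target: someTarget})
    # overrides the rate for this step only
    session.run(trainStep, feed_dict = {input: someData,
                                        target: someTarget,
                                        learning_rate: 0.0001})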
Engineero
  • May I ask why TensorFlow evaluates the tensor in each step, but the Python variable only once? – LJKS Oct 10 '17 at 15:34
  • Basically for efficiency purposes. The whole forward and backward propagation steps of the training process are handled by a highly-optimized implementation of your network that is built when you run your code. That graph lives in a different environment than your Python script (I think it's all implemented in C++, not sure), and you have to explicitly pipe things to it in order to bridge that gap. The Python script handles batch generation, updating fed parameters, and looping, but doesn't do any heavy lifting. – Engineero Oct 10 '17 at 15:41
6

Yes, the optimizer is created only once:

tf.train.AdamOptimizer(learning_rate=myLearnRate)

It remembers the passed learning rate (in fact, it creates a tensor for it if you pass a floating-point number), and later changes to myLearnRate don't affect it.
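
A quick way to convince yourself of that (a minimal self-contained sketch, assuming TF 1.x) is to mimic the conversion yourself: the float is frozen into the graph the moment the tensor is created, so rebinding the Python name afterwards changes nothing:

import tensorflow as tf

myLearnRate = 0.001
lr_tensor = tf.convert_to_tensor(myLearnRate)  # the float value is baked into the graph here
myLearnRate = 0.0001                           # rebinding the Python name has no effect on the graph
with tf.Session() as session:
    print(session.run(lr_tensor))              # still prints 0.001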

Yes, you can create a placeholder and pass it to session.run(), if you really want to. But, as you said, it's pretty uncommon and probably means you are solving your original problem in the wrong way.
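
If feeding a placeholder on every step feels clumsy, another option (again a sketch on top of the question's pseudocode, with made-up names, not something from this answer) is to keep the learning rate in a non-trainable tf.Variable and update it with tf.assign; the optimizer then reads the variable's current value each time it runs:

lr = tf.Variable(0.001, trainable=False, dtype=tf.float32)
new_lr = tf.placeholder(tf.float32, shape=[])
update_lr = tf.assign(lr, new_lr)
trainStep = tf.train.AdamOptimizer(learning_rate=lr).minimize(trainLoss)
with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    session.run(trainStep, feed_dict = {input: someData, target: someTarget})
    session.run(update_lr, feed_dict = {new_lr: 0.0001})  # decay the rate between steps
    session.run(trainStep, feed_dict = {input: someData, target: someTarget})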

Maxim
  • Sadly can't mark both answers as valid solutions, but thank you so much for your effort! – LJKS Oct 10 '17 at 15:33