I added learning rate decay to my LSTM in TensorFlow.
I changed

train_op = tf.train.RMSPropOptimizer(lr_rate).minimize(loss)

to

lr = tf.Variable(0.0, trainable=False)
train_op = tf.train.RMSPropOptimizer(lr).minimize(loss)

and now run the following at every training step:

sess.run(tf.assign(lr, lr_rate * 0.9 ** epoch))
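
In case the surrounding code matters, here is a minimal, self-contained sketch of the pattern. Only the lr variable and the per-step tf.assign match my actual code; the toy linear model, the dummy data, the value of lr_rate, and the epoch/step counts are placeholders standing in for my LSTM setup.

import tensorflow as tf
import numpy as np

# Stand-in model: a single weight fitted to dummy data
# (my real model is an LSTM; this only reproduces the learning-rate pattern).
x = tf.placeholder(tf.float32, [None, 1])
y = tf.placeholder(tf.float32, [None, 1])
w = tf.Variable(tf.zeros([1, 1]))
loss = tf.reduce_mean(tf.square(tf.matmul(x, w) - y))

lr_rate = 0.01                          # base learning rate (placeholder value)
lr = tf.Variable(0.0, trainable=False)  # current learning rate, set from Python
train_op = tf.train.RMSPropOptimizer(lr).minimize(loss)

data_x = np.random.rand(32, 1).astype(np.float32)
data_y = 2.0 * data_x

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(10):
        for step in range(100):
            # The learning-rate update in question, run before every training step.
            sess.run(tf.assign(lr, lr_rate * 0.9 ** epoch))
            sess.run(train_op, feed_dict={x: data_x, y: data_y})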
However, this change increases the execution time from about 7 minutes to more than 20 minutes.
My question is: Why does this change increase the execution time?
An obvious workaround is to do the assignment only every 1000 iterations (sketched at the end of the question). However, I'd like to understand the reasoning behind the slowdown.
- Does sess.run() take extra time?
- Does tf.assign() take extra time?
- Could I implement this tf.assign() in another, more efficient way?
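
For completeness, the "every 1000 iterations" workaround I mention above would look roughly like this. The threshold of 1000 and the loop bounds are arbitrary, and the snippet reuses lr, lr_rate, train_op, and the dummy data from the sketch earlier in the question.

# Workaround sketch: refresh the learning rate only every 1000 steps
# instead of on every single training step.
global_step = 0
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(10):
        for step in range(100):
            if global_step % 1000 == 0:
                sess.run(tf.assign(lr, lr_rate * 0.9 ** epoch))
            sess.run(train_op, feed_dict={x: data_x, y: data_y})
            global_step += 1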