The method you posted seems to fail if I don't pass the feed_dict again to sess.run(train_step). I don't know why it requires the feed_dict, but it is possible that it runs all the accumulation again, with the last example repeated. This is what I had to do in my case:
self.session.run(zero_ops)
for i in range(0, mini_batch):
    self.session.run(accum_ops, feed_dict={self.ph_X: imgs_feed[np.newaxis, i, :, :, :],
                                           self.ph_Y: flow_labels[np.newaxis, i, :, :, :],
                                           self.keep_prob: self.dropout})
self.session.run(norm_accums, feed_dict={self.ph_X: imgs_feed[np.newaxis, i, :, :, :],
                                         self.ph_Y: flow_labels[np.newaxis, i, :, :, :],
                                         self.keep_prob: self.dropout})
self.session.run(train_op, feed_dict={self.ph_X: imgs_feed[np.newaxis, i, :, :, :],
                                      self.ph_Y: flow_labels[np.newaxis, i, :, :, :],
                                      self.keep_prob: self.dropout})
And to normalize the gradient, as I understand it, you only divide the accumulated gradient by the batch size, so I just added a new op:

norm_accums = [accum_op / float(batchsize) for accum_op in accum_ops]
Has anyone else had this same feed_dict issue?
*UPDATE
As I supposed, that is wrong: it runs the whole graph again with the last example in the batch. This little snippet tests that:
import numpy as np
import tensorflow as tf

ph = tf.placeholder(dtype=tf.float32, shape=[])
var_accum = tf.get_variable("acum", shape=[],
                            initializer=tf.zeros_initializer())
acum = tf.assign_add(var_accum, ph)
divide = acum / 5.0

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    for i in range(5):
        sess.run(acum, feed_dict={ph: 2.0})
    c = sess.run([divide], feed_dict={ph: 2.0})
    # expected 10/5 = 2
    print(c)
    # but it gives 2.4, that is 12/5, so it sums one more time
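The re-execution above can be modeled without TensorFlow at all. Here is a minimal, hypothetical sketch of dataflow-graph semantics (the `AssignAdd`/`Divide` classes are mine, not a TF API): running a node first runs the ops it depends on, so fetching the division triggers one extra accumulation, reproducing the 2.4.

```python
# Tiny pure-Python model of TF1 graph semantics: running an op also
# runs every op it depends on. `divide` depends on `acum` (an
# assign_add with a side effect), so evaluating `divide` performs one
# extra accumulation before dividing.
class AssignAdd:
    def __init__(self, state, increment):
        self.state = state          # mutable one-element list (the "variable")
        self.increment = increment
    def run(self):
        self.state[0] += self.increment
        return self.state[0]

class Divide:
    def __init__(self, dep, divisor):
        self.dep = dep              # running this op re-runs its dependency
        self.divisor = divisor
    def run(self):
        return self.dep.run() / self.divisor

var_accum = [0.0]
acum = AssignAdd(var_accum, 2.0)
divide = Divide(acum, 5.0)

for _ in range(5):
    acum.run()                      # five accumulations: var_accum[0] == 10.0
print(divide.run())                 # re-runs the assign_add first: 12.0 / 5 = 2.4
```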
I figured out how to solve this. TensorFlow has conditional operations. I put the plain accumulation in one branch, and the last accumulation with normalization and update in the other branch. My code is a mess, but here is a small example so you can quickly check what I mean:
import numpy as np
import tensorflow as tf

ph = tf.placeholder(dtype=tf.float32, shape=[])
# placeholder for conditional branching in the graph
condph = tf.placeholder(dtype=tf.bool, shape=[])
var_accum = tf.get_variable("acum", shape=[], initializer=tf.zeros_initializer())
accum_op = tf.assign_add(var_accum, ph)

# function used when the condition condph is True
def truefn():
    return accum_op

# function used when the condition condph is False
def falsefn():
    div = accum_op / 5.0
    return div

# the conditional operation
cond = tf.cond(condph, truefn, falsefn)

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    for i in range(4):
        # run only the accumulation
        sess.run(cond, feed_dict={ph: 2.0, condph: True})
    # run the accumulation and the division
    c = sess.run(cond, feed_dict={ph: 2.0, condph: False})
    print(c)
    # now it gives 2
*IMPORTANT NOTE: Forget everything above, it didn't work. The optimizers raise an error.
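For what it's worth, the pitfall itself can be avoided by normalizing the stored gradient values once, after the accumulation loop, instead of attaching the division to the accumulation op. Here is a framework-free sketch of that pattern (the names `accumulate_and_step` and `grads_fn` are hypothetical, not any TF API):

```python
# Hypothetical sketch: gradient accumulation with plain Python floats.
# The division is applied to the accumulated *values* after the loop,
# so it cannot trigger an extra accumulation.
def accumulate_and_step(params, grads_fn, batch, lr=0.1):
    accum = [0.0] * len(params)                          # analogous to zero_ops
    for example in batch:
        g = grads_fn(params, example)
        accum = [a + gi for a, gi in zip(accum, g)]      # analogous to accum_ops
    accum = [a / len(batch) for a in accum]              # normalize once
    return [p - lr * a for p, a in zip(params, accum)]   # gradient step

# Usage: minimize f(w) = (w - 3)^2 over a "batch" of identical examples.
grads = lambda params, _x: [2.0 * (params[0] - 3.0)]
w = [0.0]
for _ in range(100):
    w = accumulate_and_step(w, grads, batch=[None] * 4)
print(round(w[0], 3))  # converges toward 3.0
```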