I'm looking to generate summaries during my NN training, similar to here, but all the examples I've seen use feed_dict rather than tf.data. My training and testing have separate initializers:
self.train_init = iterator.make_initializer(train_data) # initializer for train_data
self.test_init = iterator.make_initializer(test_data) # initializer for test_data
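For context, here is a minimal, self-contained sketch of how a reinitializable iterator with two initializers like the ones above can be built. The toy datasets and variable names are assumptions for illustration; it uses `tf.compat.v1` so the TF 1.x-style code also runs under TF 2.x:

```python
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Toy stand-in datasets (assumptions, not the real data).
train_data = tf.data.Dataset.from_tensor_slices(
    np.arange(8, dtype=np.float32)).batch(4)
test_data = tf.data.Dataset.from_tensor_slices(
    np.arange(100, 104, dtype=np.float32)).batch(4)

# One iterator whose structure matches both datasets; each make_initializer
# call returns an op that points the iterator at that dataset.
iterator = tf.data.Iterator.from_structure(
    tf.data.get_output_types(train_data),
    tf.data.get_output_shapes(train_data))
next_batch = iterator.get_next()

train_init = iterator.make_initializer(train_data)
test_init = iterator.make_initializer(test_data)

with tf.Session() as sess:
    sess.run(train_init)                       # iterator now reads train_data
    first_train_batch = sess.run(next_batch)
    sess.run(test_init)                        # switch: iterator now reads test_data
    first_test_batch = sess.run(next_batch)
```

Running either initializer rewinds the iterator to the start of the corresponding dataset, so the same `next_batch` tensor (and any model built on top of it) reads from whichever dataset was initialized last.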
During training, I run the training initializer with sess.run(self.train_init), but in order to test the accuracy I believe I need to run sess.run(self.test_init). My current code is shown below:
for i in range(100):
    sess.run(self.train_init)  # point the iterator at the training data
    total_loss = 0
    n_batches = 0
    try:
        while True:
            _, l = sess.run([self.optimizer, self.loss])
            total_loss += l
            n_batches += 1
    except tf.errors.OutOfRangeError:
        pass  # training dataset exhausted for this epoch
    if i % 10 == 0:
        print('Avg. loss epoch {0}: {1}'.format(i, total_loss / n_batches))
        acc, summ = sess.run([self.accuracy, self.summary_op])
        writer.add_summary(summ, i)
As it currently stands, accuracy is measured every 10 epochs, but it's computed on a training batch, not a testing batch. I want to see training and testing accuracy over time so I can tell clearly whether over-fitting is occurring (good training accuracy but poor testing accuracy).
I have no idea how to do this with tf.data. How do I switch between the initializers over the 100 epochs while also writing the summaries I need?
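One pattern I've seen (sketched below under assumptions, not my actual model) is a helper that runs one full pass over whichever dataset the given initializer selects; the training loop calls it once per epoch with `train_init` and, when metrics are wanted, again with `test_init`. The `metric` tensor here is just a stand-in for a real loss/accuracy:

```python
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Toy stand-in datasets (assumptions for illustration).
train_data = tf.data.Dataset.from_tensor_slices(
    np.arange(8, dtype=np.float32)).batch(4)
test_data = tf.data.Dataset.from_tensor_slices(
    np.arange(100, 104, dtype=np.float32)).batch(4)

iterator = tf.data.Iterator.from_structure(
    tf.data.get_output_types(train_data),
    tf.data.get_output_shapes(train_data))
batch = iterator.get_next()
train_init = iterator.make_initializer(train_data)
test_init = iterator.make_initializer(test_data)

# Stand-in for a real accuracy/loss op built on `batch`.
metric = tf.reduce_mean(batch)

def run_epoch(sess, init_op):
    """Re-initialize the iterator, then average `metric` over every batch."""
    sess.run(init_op)
    total, n = 0.0, 0
    while True:
        try:
            total += sess.run(metric)
            n += 1
        except tf.errors.OutOfRangeError:
            break  # this dataset is exhausted
    return total / n

with tf.Session() as sess:
    train_metric = run_epoch(sess, train_init)  # training pass
    test_metric = run_epoch(sess, test_init)    # separate test pass
```

For the summaries themselves, a common approach is two `tf.summary.FileWriter` instances (e.g. one logging to a `train` subdirectory and one to a `test` subdirectory), adding each evaluated summary with the same global step so TensorBoard overlays the two curves on one chart.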