I want to train a convolutional neural network (on the MNIST data set, with TensorFlow) several times from scratch and get the same accuracy every time. To achieve this, I:
- Save an untrained, only-initialized net (right after `global_variables_initializer`)
- Load this untrained net every time I start training
- Set `shuffle=False` in `mnist.train.next_batch`, so the image sequence is the same every time
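To make the procedure above concrete, here is a minimal NumPy analogue of the workflow (save freshly initialized weights once, reload them at the start of every run, feed the data in a fixed order). All names here are illustrative, not the actual script; on a CPU this kind of computation is fully deterministic, so two runs produce bit-identical losses:

```python
import numpy as np

def init_and_save(path="init_weights.npy", seed=0):
    # Save untrained, freshly initialized weights once
    # (analogue of dumping the net right after global_variables_initializer).
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(784, 10))
    np.save(path, w)

def train_run(path="init_weights.npy", steps=5):
    # Each run restores the SAME saved weights and sees the
    # images in the SAME order (analogue of shuffle=False).
    w = np.load(path)
    data = np.linspace(0.0, 1.0, 784 * steps).reshape(steps, 784)  # fixed "images"
    losses = []
    for x in data:  # deterministic batch order
        logits = x @ w
        losses.append(float(np.square(logits).mean()))
        # plain SGD step on the mean-squared-logit toy loss
        w -= 0.01 * (2.0 / logits.size) * np.outer(x, logits)
    return losses

init_and_save()
run_a = train_run()
run_b = train_run()
assert run_a == run_b  # bit-identical losses across runs
```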
I have done this before with a feed-forward net (3 hidden layers), and every time I run that Python script I get exactly the same loss and accuracy values.
But the "same" script, with the model changed from a feed-forward net to a convolutional neural net, produces slightly different loss/accuracy values on every run.
So I reduced the batch size to one and looked at the loss value for each individual image: the first two images always get the same loss value, but the losses for the rest differ slightly from run to run.
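The batch-size-1 check can be summarized as: record the per-image losses of two runs and find the first index where they diverge. A small sketch with hypothetical loss lists (the helper names and the sample numbers are made up for illustration):

```python
def first_divergence(losses_a, losses_b):
    # Return the index of the first image whose loss differs
    # between two runs, or None if the runs agree everywhere.
    for i, (a, b) in enumerate(zip(losses_a, losses_b)):
        if a != b:
            return i
    return None

# Toy per-image losses mirroring the behaviour described above:
# the first two images agree, then the runs drift apart.
run1 = [0.25, 0.50, 0.7012, 0.6431]
run2 = [0.25, 0.50, 0.7013, 0.6435]
print(first_divergence(run1, run2))  # -> 2 (the third image)
```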
Any idea why?