I am trying to write a program to test TensorFlow performance on a GPU. The test data is MNIST, trained with supervised learning using a multilayer perceptron (neural network). I followed this simple example, but I changed the number of batch gradient steps to 10000:
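For context, the graph is built roughly like the sketch below before the training loop; the hidden-layer size (256), initializers, and variable names are illustrative, not my exact code:

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# Load MNIST with one-hot labels
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

# Placeholders for flattened 28x28 images and one-hot labels
x = tf.placeholder(tf.float32, [None, 784])
y_ = tf.placeholder(tf.float32, [None, 10])

# One hidden layer (multilayer perceptron); 256 units is illustrative
W1 = tf.Variable(tf.truncated_normal([784, 256], stddev=0.1))
b1 = tf.Variable(tf.zeros([256]))
h1 = tf.nn.relu(tf.matmul(x, W1) + b1)

# Output layer producing logits for the 10 digit classes
W2 = tf.Variable(tf.truncated_normal([256, 10], stddev=0.1))
b2 = tf.Variable(tf.zeros([10]))
y = tf.matmul(h1, W2) + b2

# Cross-entropy loss and plain gradient descent
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

sess = tf.Session()
sess.run(tf.global_variables_initializer())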
for i in range(10000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
    if i % 500 == 0:
        print(i)
Finally, I check the prediction accuracy with this code:
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
print(tf.convert_to_tensor(mnist.test.images).get_shape())
It turns out that the accuracy differs between CPU and GPU: the GPU run reports an accuracy of roughly 0.9xx, while the CPU run reports only about 0.3xx. Does anyone know the reason, or why this can happen?
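In case it matters, I am not pinning the graph to a device; both runs use TensorFlow's default placement. An untested sketch like the following (adapted from the standard device-placement example) is what I would use to force a specific device and to log where each op actually runs:

import tensorflow as tf

# Pin ops to a device and log the actual placement
with tf.device("/cpu:0"):          # use "/gpu:0" for the GPU run
    a = tf.constant([[1.0, 2.0]])
    b = tf.constant([[3.0], [4.0]])
    c = tf.matmul(a, b)

config = tf.ConfigProto(log_device_placement=True)  # prints where each op runs
sess = tf.Session(config=config)
print(sess.run(c))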