
I tried running the deep MNIST tutorial code on my computer (https://github.com/tensorflow/tensorflow/blob/r1.4/tensorflow/examples/tutorials/mnist/mnist_deep.py), but it exits when trying to print out test accuracy. The only changes I made were changing the number of iterations to 100 and changing the frequency of printing to once every 10 iterations as follows:

Line 159:

for i in range(20000):  

became

for i in range(100):  

and Line 161:

if i % 100 == 0:  

became

if i % 10 == 0:  

This is what it outputs (ran in cmd):

C:\Users\Steven\Documents\Atom\tensorflow-tutorial>python -i mnist_deep.py
Extracting /tmp/tensorflow/mnist/input_data\train-images-idx3-ubyte.gz
Extracting /tmp/tensorflow/mnist/input_data\train-labels-idx1-ubyte.gz
Extracting /tmp/tensorflow/mnist/input_data\t10k-images-idx3-ubyte.gz
Extracting /tmp/tensorflow/mnist/input_data\t10k-labels-idx1-ubyte.gz
Saving graph to: C:\Users\Steven\AppData\Local\Temp\tmpeu8pfnwd
2018-01-18 21:35:00.216476: I C:\tf_jenkins\home\workspace\rel-win\M\windows\PY\36\tensorflow\core\platform\cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX
step 0, training accuracy 0.24
step 10, training accuracy 0.16
step 20, training accuracy 0.42
step 30, training accuracy 0.64
step 40, training accuracy 0.7
step 50, training accuracy 0.68
step 60, training accuracy 0.74
step 70, training accuracy 0.74
step 80, training accuracy 0.84
step 90, training accuracy 0.78

C:\Users\Steven\Documents\Atom\tensorflow-tutorial>

Notice how once it's done training, the script exits by itself with no error instead of printing the test accuracy, even though I provided the -i flag. When I remove the lines that print the test accuracy (Lines 167 and 168),

print('test accuracy %g' % accuracy.eval(feed_dict={
        x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))

the script then works perfectly. Therefore, it seems that line causes the script to exit somehow.

I've tried running the softmax tutorial (https://www.tensorflow.org/get_started/mnist/pros), which also prints test accuracy using the same dataset,

print(accuracy.eval(feed_dict = {x: mnist.test.images, y_: mnist.test.labels}))

and it works just fine:

C:\Users\Steven\Documents\Atom\tensorflow-tutorial>python -i mnist_softmax_tutorial.py
Extracting MNIST_data\train-images-idx3-ubyte.gz
Extracting MNIST_data\train-labels-idx1-ubyte.gz
Extracting MNIST_data\t10k-images-idx3-ubyte.gz
Extracting MNIST_data\t10k-labels-idx1-ubyte.gz
2018-01-18 21:49:34.174464: I C:\tf_jenkins\home\workspace\rel-win\M\windows\PY\36\tensorflow\core\platform\cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX
0.9143
>>> exit()

C:\Users\Steven\Documents\Atom\tensorflow-tutorial>

I looked at another post with a similar error (Deep MNIST for Experts tutorial trouble / FailedPreconditionError), and it said to run the Windows installation verification script (https://gist.github.com/mrry/ee5dbcfdd045fa48a27d56664411d41c). However, I ran it and got no issues:

C:\Users\Steven\Documents\Atom\tensorflow-tutorial>python tensorflow_self_check.py
TensorFlow successfully installed.
The installed version of TensorFlow does not include GPU support.

I also tried reinstalling TensorFlow (using pip uninstall and then pip install), but that did not fix the problem.

My Python version is as follows:

Python 3.6.3 (v3.6.3:2c5fed8, Oct  3 2017, 18:11:49) [MSC v.1900 64 bit (AMD64)] on win32

I installed TensorFlow using

pip3 install --upgrade tensorflow

Any help is appreciated. Thanks!

  • Probably an issue with the string formatting, as here: https://stackoverflow.com/a/5082482/4132383. Have you tried printing without formatting, as in the softmax tutorial? – sladomic Jan 19 '18 at 08:45
  • Thanks for the suggestion. I changed the print line to `print(accuracy.eval(feed_dict={ x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))` but the behavior doesn't change. – Steven Cao Jan 19 '18 at 16:42

2 Answers


It turns out the issue was that my computer did not have enough RAM to evaluate all 10,000 test images at once. Normally, something would throw a MemoryError, but I guess TensorFlow suppresses that error.

>>> with sess.as_default(): print(accuracy.eval(feed_dict = {x: mnist.test.images[:1000,:], y_: mnist.test.labels[:1000,:], keep_prob: 1.0}))
...
0.442
>>> with sess.as_default(): print(accuracy.eval(feed_dict = {x: mnist.test.images[:10000,:], y_: mnist.test.labels[:10000,:], keep_prob: 1.0}))
...

C:\Users\Steven\Documents\Atom\tensorflow-tutorial>

With the second line, Python's memory usage climbs to roughly 2 GB of RAM before the process gives up and quits. I'm not sure why the error message is suppressed.
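A common workaround for this (not from the original thread, so treat it as a sketch) is to evaluate the test set in smaller batches and take a size-weighted average, so only one batch of images is in memory at a time. Here `eval_fn` is a hypothetical stand-in for a call like the tutorial's `accuracy.eval(...)`:

```python
import numpy as np

def batched_accuracy(eval_fn, images, labels, batch_size=1000):
    """Average per-batch accuracies over the whole test set.

    eval_fn(image_batch, label_batch) -> accuracy on that slice,
    e.g. a wrapper around accuracy.eval(...). Weighting each batch
    by its size keeps the average correct when the final batch is
    smaller than batch_size.
    """
    n = len(images)
    total = 0.0
    for start in range(0, n, batch_size):
        image_batch = images[start:start + batch_size]
        label_batch = labels[start:start + batch_size]
        total += eval_fn(image_batch, label_batch) * len(image_batch)
    return total / n
```

With the tutorial's graph, `eval_fn` could be something like `lambda xb, yb: accuracy.eval(feed_dict={x: xb, y_: yb, keep_prob: 1.0})`, so at most `batch_size` test images are fed to the session per step instead of all 10,000.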


It's working on my system; can you check it once again?

Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
Extracting /tmp/tensorflow/mnist/input_data/train-images-idx3-ubyte.gz
Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
Extracting /tmp/tensorflow/mnist/input_data/train-labels-idx1-ubyte.gz
Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
Extracting /tmp/tensorflow/mnist/input_data/t10k-images-idx3-ubyte.gz
Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
Extracting /tmp/tensorflow/mnist/input_data/t10k-labels-idx1-ubyte.gz
Saving graph to: /tmp/tmpaxAoQ2
2018-01-19 17:02:14.087095: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
step 0, training accuracy 0.06
test accuracy 0.4421
Abhay Singh
  • Thanks for the suggestion, but I'm not sure what you mean by "check it once again." I did try it again and once again it did not work (as described above). – Steven Cao Jan 19 '18 at 16:44
  • I meant to try again; that code runs on my system and prints the test accuracy, as you can see above. Can you please tell me which version of TensorFlow you are using? – Abhay Singh Jan 20 '18 at 17:01
  • `tf.__version__` is 1.4.0 – Steven Cao Jan 21 '18 at 05:36