
I just started using TensorFlow and I followed the tutorial example on the MNIST dataset. It went well; I got around 90% accuracy.

But after I replaced next_batch with my own version, the result was much worse than before, usually around 50%.

Instead of using the data TensorFlow downloads and parses, I downloaded the dataset from this website and used pandas/numpy to get what I want:

import pandas as pd
import numpy as np

df = pd.read_csv('mnist_train.csv', header=None)   # no header row: column 0 is the label
X = df.drop(0, axis=1)                              # pixel columns
Y = df[0]                                           # label column
temp = np.zeros((Y.size, Y.max() + 1))
temp[np.arange(Y.size), Y] = 1                      # one-hot encode the labels
np.save('X', X)
np.save('Y', temp)

I do the same thing for the test data; a sketch of that step is below.
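A sketch of the test-set preprocessing, assuming the test CSV is named mnist_test.csv (the file name is my assumption) and has the same layout as the training file:

# Same preprocessing applied to the test set
df_test = pd.read_csv('mnist_test.csv', header=None)
X_test = df_test.drop(0, axis=1)                        # pixel columns
Y_test = df_test[0]                                     # label column
temp_test = np.zeros((Y_test.size, Y_test.max() + 1))
temp_test[np.arange(Y_test.size), Y_test] = 1           # one-hot encode the labels
np.save('X_test', X_test)
np.save('Y_test', temp_test)

Then, following the tutorial, nothing is changed: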

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 784])   # flattened 28x28 pixel images
y_ = tf.placeholder(tf.float32, shape=[None, 10])   # one-hot labels
X = np.load('X.npy')
Y = np.load('Y.npy')
X_test = np.load('X_test.npy')
Y_test = np.load('Y_test.npy')
BATCHES = 1000   # batch size


W = tf.Variable(tf.truncated_normal([784,10], stddev=0.1))

# W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)

cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
train_step = tf.train.GradientDescentOptimizer(0.05).minimize(cross_entropy)

correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))


sess = tf.InteractiveSession()
tf.global_variables_initializer().run()

Right here is my own get_mini_batch. I shuffle the original data's indices, then each time take a batch of rows out of it, which seems to be exactly what the example code does. The only difference is that I throw away some of the data at the tail.

pos = 0
idx = np.arange(X.shape[0])
np.random.shuffle(idx)                       # shuffle the row indices once up front


for _ in range(1000):
    # take the next BATCHES rows according to the shuffled index
    batch_xs, batch_ys = X[idx[pos:pos+BATCHES], :], Y[idx[pos:pos+BATCHES], :]
    if pos + BATCHES >= X.shape[0]:
        # reached the end of an epoch: reshuffle and start over
        pos = 0
        idx = np.arange(X.shape[0])
        np.random.shuffle(idx)
    pos += BATCHES
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
print(sess.run(accuracy, feed_dict={x: X_test, y_: Y_test}))

It confuses me why my version is so much worse than the tutorial's.

  • Print out X[0] from your data: does it contain numbers 0-255? – lejlot Mar 26 '17 at 01:00
  • Yep, X[0] contains the very first data instance, not the headers. – 7d9af0aec9 Mar 26 '17 at 01:18
  • I was not thinking about the header but rather the values. MNIST is usually normalised to have values in [0, 1], so if your data is "raw" 0-255 you might want to divide it by 255 before pushing it into the network. – lejlot Mar 26 '17 at 11:36
  • That is exactly the reason why the performance was so poor. Thank you for your help! – 7d9af0aec9 Mar 26 '17 at 17:48

1 Answer


Like lejlot said, we should normalize the data before we push it into the neural network. See this post.
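A minimal sketch of that normalization for this setup, assuming the saved arrays hold raw pixel values in [0, 255]:

# scale raw pixel values from [0, 255] down to [0, 1] before feeding the network
X = np.load('X.npy').astype(np.float32) / 255.0
X_test = np.load('X_test.npy').astype(np.float32) / 255.0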
