You're using the very basic linear model in the beginners example?
Here's a trick to debug it: watch the cross-entropy as you increase the batch size (the first line below is from the example; the second is one I added):
cross_entropy = -tf.reduce_sum(y_*tf.log(y))
cross_entropy = tf.Print(cross_entropy, [cross_entropy], "CrossE")
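For context, this is roughly how those two lines sit in the rest of the beginners script, assuming the old TF 1.x-era APIs (tf.Print and the bundled MNIST input_data helper are gone in TF 2); the only thing changed from the tutorial is the batch size:

from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

# The linear softmax model from the tutorial
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)
y_ = tf.placeholder(tf.float32, [None, 10])

cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
cross_entropy = tf.Print(cross_entropy, [cross_entropy], "CrossE")
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
for _ in range(1000):
    # Bump the batch size up from the tutorial's 100 to trigger the problem
    batch_xs, batch_ys = mnist.train.next_batch(205)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})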
At a batch size of 204, you'll see:
I tensorflow/core/kernels/logging_ops.cc:64] CrossE[92.37558]
I tensorflow/core/kernels/logging_ops.cc:64] CrossE[90.107414]
But at 205, you'll see a sequence like this, from the start:
I tensorflow/core/kernels/logging_ops.cc:64] CrossE[472.02966]
I tensorflow/core/kernels/logging_ops.cc:64] CrossE[475.11697]
I tensorflow/core/kernels/logging_ops.cc:64] CrossE[1418.6655]
I tensorflow/core/kernels/logging_ops.cc:64] CrossE[1546.3833]
I tensorflow/core/kernels/logging_ops.cc:64] CrossE[1684.2932]
I tensorflow/core/kernels/logging_ops.cc:64] CrossE[1420.02]
I tensorflow/core/kernels/logging_ops.cc:64] CrossE[1796.0872]
I tensorflow/core/kernels/logging_ops.cc:64] CrossE[nan]
Ack, NaNs are showing up. Basically, the large batch size is creating such a huge gradient that your model is spiraling out of control: the updates it's applying are too large, overshooting the direction it should go by a huge margin, until the weights diverge far enough that some predicted probability underflows to exactly zero and tf.log(y) blows up.
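You can see the arithmetic behind that final nan in isolation (a small NumPy sketch, not part of the example; the values are made up):

import numpy as np

# Once the model has diverged, softmax underflows and assigns probability 0.0
# to some class. log(0) is -inf, and 0 * -inf is nan, so the summed
# cross-entropy (and every gradient computed from it) turns into nan.
y  = np.array([0.0, 1.0])   # degenerate predicted probabilities
y_ = np.array([0.0, 1.0])   # one-hot label
with np.errstate(divide="ignore", invalid="ignore"):
    print(-np.sum(y_ * np.log(y)))   # -> nan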
In practice, there are a few ways to fix this. You could reduce the learning rate from 0.01 to, say, 0.005, which results in a final accuracy of 0.92.
train_step = tf.train.GradientDescentOptimizer(0.005).minimize(cross_entropy)
Or you could use a more sophisticated optimization algorithm (Adam, Momentum, etc.) that does more work to figure out the size and direction of each update; see the one-line swaps below. Or you could use a more complex model that has more free parameters across which to disperse that big gradient.
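For example, swapping in one of the other built-in TF 1.x optimizers is a one-line change (the learning rates here are just illustrative, not tuned):

train_step = tf.train.AdamOptimizer(0.005).minimize(cross_entropy)
# or
train_step = tf.train.MomentumOptimizer(0.005, momentum=0.9).minimize(cross_entropy)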