
I use my own synthesized images as a dataset, covering 0-9, A-Z, and nac:

  • a total of 37 categories,
  • fifteen different fonts,
  • 1000 images per character for each font,
  • 509,000 images in total (some fonts lack some characters),
  • 70% used as the training set, 30% as the testing set.

The images are 28x28 grayscale, with white characters on a black background.
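As a minimal sketch of the split described above (the names `samples`, `train_set`, and `test_set` are illustrative stand-ins, not the asker's actual code), the data should be shuffled before the 70/30 cut so both sets see every font and category:

```python
import random

# Toy stand-in for the ~509,000 (image path, label) pairs.
samples = [("img_%05d.png" % i, i % 37) for i in range(1000)]

random.seed(0)
random.shuffle(samples)           # shuffle BEFORE splitting, so training and
split = int(len(samples) * 0.7)   # test sets both cover all fonts/categories
train_set, test_set = samples[:split], samples[split:]
```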

I use the two-conv-layer network from the TensorFlow MNIST handwritten-digit recognition demo, with tf.nn.softmax_cross_entropy_with_logits to compute the loss.
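For reference, tf.nn.softmax_cross_entropy_with_logits fuses the softmax and the cross-entropy into one numerically stable op. A NumPy sketch of what it computes (an illustration of the math, not the TF implementation) looks like this:

```python
import numpy as np

def softmax_cross_entropy_with_logits(labels, logits):
    """Sketch of the TF op: softmax over the last axis, then cross-entropy
    against one-hot labels, returning one loss value per example."""
    # Subtract the per-row max before exponentiating, for numerical stability.
    z = logits - logits.max(axis=-1, keepdims=True)
    log_softmax = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    return -(labels * log_softmax).sum(axis=-1)

# Toy batch: 2 examples, 37 classes (0-9, A-Z, nac).
logits = np.zeros((2, 37))        # uniform logits -> loss of log(37) each
labels = np.eye(37)[[0, 36]]
loss = softmax_cross_entropy_with_logits(labels, logits)
```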

The figures show the results after 10000 and 20000 iterations respectively. Why does this strange situation occur? The accuracy suddenly drops, at regular intervals.

[Figure: training curves at iteration 10000]

[Figure: training curves at iteration 20000]

  • Have you randomised the order of the data? If you're using batch training then it could be that the model only learns (for example) the A-Z and not the numbers. Each time it cycles through the numbers it performs badly, but never has a chance to really learn them, since a few batches later it's back on letters. – user6916458 Oct 04 '17 at 00:58
  • Thank you, that may be the reason. I will try some modifications. – ziyi lin Oct 04 '17 at 02:12
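The comment above can be illustrated with a small stand-alone sketch (toy labels only, no real images or TF code): if the data is stored sorted by class, sequential batching produces single-class batches, while reshuffling the index order each epoch mixes the classes within every batch.

```python
import random

# Toy labels ordered by class, as a class-sorted dataset would be on disk.
labels = [c for c in range(37) for _ in range(100)]   # 3700 toy samples
batch_size = 100

# Without shuffling, every batch contains exactly one class...
seq_batches = [labels[i:i + batch_size] for i in range(0, len(labels), batch_size)]
classes_per_seq_batch = [len(set(b)) for b in seq_batches]

# ...whereas reshuffling the index order each epoch mixes the classes.
random.seed(0)
order = list(range(len(labels)))
random.shuffle(order)
shuffled = [labels[i] for i in order]
shuf_batches = [shuffled[i:i + batch_size] for i in range(0, len(shuffled), batch_size)]
classes_per_shuf_batch = [len(set(b)) for b in shuf_batches]
```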

1 Answer


I think this is related to my question:

Loss increases after restoring checkpoint

Take a look at this TensorBoard chart.

On my side, every time the model is restored from a checkpoint, there is a drop in performance.
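One possible mechanism (an assumption, sketched in plain Python rather than TensorFlow): if the checkpoint stores only the model weights, restoring restarts the input pipeline from scratch, so training resumes with a different batch order than the uninterrupted run. Saving the pipeline's RNG state alongside the weights lets a restore reproduce the exact batch order it would have seen:

```python
import random

def epoch_order(rng, n):
    """Return a shuffled visiting order for n samples, drawn from rng."""
    order = list(range(n))
    rng.shuffle(order)
    return order

n = 10
rng = random.Random(42)
epoch_order(rng, n)                       # epoch 1, consumed before saving

# Checkpoint everything needed to resume, not just the model weights.
checkpoint = {"weights": [0.1, 0.2], "rng_state": rng.getstate(), "epoch": 1}

continued_order = epoch_order(rng, n)     # epoch 2 in the uninterrupted run

restored = random.Random()
restored.setstate(checkpoint["rng_state"])
resumed_order = epoch_order(restored, n)  # epoch 2 after restoring: identical
```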