My project is OCR. I used TensorFlow's image_retraining example (v0.10.0) to recognize letters. I trained it with 128x128 pictures. After that, I used my code to feed in 1306 letter pictures that I had segmented from one page of a document.
The code runs very slowly. It takes about 3 seconds to recognize one letter (nearly 30 minutes for all 1306 pictures) on my laptop, and about 38 seconds per letter (nearly 6 hours for all 1306) on a Raspberry Pi 2.
I don't know why it runs so slowly. My C++ SVM code in Qt takes just 5 seconds to do the same task (it uses 32x24 pictures). Is it because my pictures are too large, or because Python runs slower than C++? Could you give me some advice on making it run faster?
Update #1: The picture size is not the main problem. According to the timing chart, the code is slow because of this command:
predictions = sess.run(softmax_tensor,
                       {'DecodeJpeg/contents:0': image_data})
Does anyone have advice on how to make this code run faster?
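For reference, here is a minimal sketch of the restructuring that usually fixes this, assuming the code follows the stock label_image pattern from the image_retraining example: the expensive part is often not `sess.run` itself but re-reading the graph file and creating a new `Session` for every picture. Loading the graph once and reusing one `Session` for all 1306 images amortizes that cost. The file names (`retrained_graph.pb`, `letters/*.jpg`) and the output tensor name `final_result:0` (the default in retrain.py) are assumptions; substitute your own.

```python
# Sketch: one-time graph load + one Session reused for every image.
# Paths and the 'final_result:0' tensor name are placeholders / assumptions.
import glob
import tensorflow as tf

# One-time setup (expensive): read and import the retrained graph.
with tf.gfile.FastGFile('retrained_graph.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

image_paths = sorted(glob.glob('letters/*.jpg'))

# One Session for the whole batch, instead of one per picture.
with tf.Session() as sess:
    softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
    for path in image_paths:
        image_data = tf.gfile.FastGFile(path, 'rb').read()
        # Same run call as in the question, but graph loading and
        # Session creation are now paid only once overall.
        predictions = sess.run(softmax_tensor,
                               {'DecodeJpeg/contents:0': image_data})
```

If the per-image `sess.run` is still slow after this, note that feeding `'DecodeJpeg/contents:0'` makes the graph decode the JPEG on every call; decoding the images yourself and feeding the raw arrays (e.g. via the `'DecodeJpeg:0'` tensor) is a common further optimization.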