I have already read the answers to this question.
I have a series of images, each containing a single word of 3-10 characters. The images are generated on a computer, so their quality is consistent and they contain no noise. The font is quite large (about 30 pixels in height). This should already be easy enough for Tesseract to read accurately, but what techniques can I use to improve the speed, even if the gain is only a few milliseconds?
The character set consists of uppercase letters only. Since the OCR task is so specific, would it help to train the Tesseract engine on this particular font and font size, or is that overkill?
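For context, this is roughly the kind of setup I'd be timing: a minimal sketch using the Tesseract C++ API, reusing one initialized instance across images, forcing single-word page segmentation, and restricting the character set with `tessedit_char_whitelist`. The file names are placeholders, and my understanding is that the whitelist variable is honored by the legacy engine but I'm less sure about the newer LSTM engine.

```cpp
#include <tesseract/baseapi.h>
#include <leptonica/allheaders.h>
#include <cstdio>

int main() {
    tesseract::TessBaseAPI api;
    // Initialize once and reuse the same instance for every image,
    // rather than paying the initialization cost per word.
    if (api.Init(nullptr, "eng")) {
        fprintf(stderr, "Could not initialize Tesseract\n");
        return 1;
    }
    // Each image contains exactly one word, so skip layout analysis.
    api.SetPageSegMode(tesseract::PSM_SINGLE_WORD);
    // Restrict recognition to uppercase letters only.
    api.SetVariable("tessedit_char_whitelist", "ABCDEFGHIJKLMNOPQRSTUVWXYZ");

    const char* files[] = {"word1.png", "word2.png"};  // placeholder names
    for (const char* f : files) {
        Pix* image = pixRead(f);
        if (!image) continue;
        api.SetImage(image);
        char* text = api.GetUTF8Text();
        printf("%s", text);
        delete[] text;
        pixDestroy(&image);
    }
    api.End();
    return 0;
}
```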
Edited to include sample
Other than Tesseract, are there any solutions usable from C/C++ that would give better performance? Could this be done faster with OpenCV? Linux compatibility is preferred.
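Since the font and size are fixed, one idea I'm considering is skipping full OCR entirely and doing plain template matching against pre-rendered glyph images. Below is a rough sketch of what I mean; the file names (`word.png`, `A.png` .. `Z.png`) and the 0.9 threshold are placeholders, and the left-to-right skip is only a crude stand-in for proper non-maximum suppression.

```cpp
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

int main() {
    // Load the rendered word (fixed font, clean background, no noise).
    cv::Mat word = cv::imread("word.png", cv::IMREAD_GRAYSCALE);
    if (word.empty()) return 1;

    struct Hit { int x; char c; };
    std::vector<Hit> hits;

    // Match each pre-rendered glyph template (A.png .. Z.png, same font
    // and size as the word images) against the word image.
    for (char c = 'A'; c <= 'Z'; ++c) {
        cv::Mat glyph = cv::imread(std::string(1, c) + ".png", cv::IMREAD_GRAYSCALE);
        if (glyph.empty() || glyph.cols > word.cols || glyph.rows > word.rows)
            continue;

        cv::Mat result;
        cv::matchTemplate(word, glyph, result, cv::TM_CCOEFF_NORMED);

        // Scan left to right; after accepting a match, skip one glyph
        // width as a crude form of non-maximum suppression.
        for (int x = 0; x < result.cols; ) {
            double best = 0.0;
            cv::minMaxLoc(result.col(x), nullptr, &best);
            if (best > 0.9) {            // threshold would need tuning
                hits.push_back({x, c});
                x += glyph.cols;
            } else {
                ++x;
            }
        }
    }

    // Order the accepted glyphs left to right to read off the word.
    std::sort(hits.begin(), hits.end(),
              [](const Hit& a, const Hit& b) { return a.x < b.x; });
    std::string text;
    for (const Hit& h : hits) text += h.c;
    std::cout << text << std::endl;
    return 0;
}
```

Would an approach like this be expected to beat Tesseract on speed for such constrained input, or are there pitfalls I'm not seeing?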
Sample