# requires: import easyocr, gc, torch (numpy only for the commented-out dummy batch)
def EasyOcrTextbatch(self):
    batchsize = 16
    reader = easyocr.Reader(['en'], cudnn_benchmark=True)
    # reader = easyocr.Reader(['en'], gpu=False)
    # dummy = np.zeros([8, 512, 384, 3], dtype=np.uint8)
    # paragraph = reader.readtext_batched(dummy)
    # Run OCR on the whole list of image paths in batches of 16
    paragraph = reader.readtext_batched(self.imglist, batch_size=batchsize)
    # paragraph = reader.readtext(self.imglist, batch_size=batchsize, paragraph=True, detail=0)
    del reader
    gc.collect()
    torch.cuda.empty_cache()
    return paragraph

The above code does not speed things up; to my surprise, running the images sequentially was faster. The code below is faster than the batched version above.

def EasyOcrTextSequence(self):
    reader = easyocr.Reader(['en'])
    # reader = easyocr.Reader(['en'], cudnn_benchmark=False)
    # dummy = np.zeros([32, 256, 256, 1], dtype=np.uint8)
    # Decode and grayscale every image up front, then OCR them one at a time
    k = [cv2.cvtColor(cv2.imread(i), cv2.COLOR_BGR2GRAY) for i in self.imglist]
    self.arr = np.array(k)
    j = [reader.readtext(i, paragraph=True, detail=0, batch_size=16) for i in self.arr]
    del reader
    return j
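
For reference, the per-image time quoted below can be reproduced with a simple wall-clock harness along these lines (a sketch; `ocr` is just a placeholder for the class instance that holds `imglist` and the two methods above):

    import time

    start = time.perf_counter()
    results = ocr.EasyOcrTextSequence()
    elapsed = time.perf_counter() - start
    print(f"{elapsed / len(ocr.imglist):.3f} s per image")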

The average time per image comes to about 0.34 seconds and I want to reduce it substantially. Things I have tried:

  1. Passing a batch size while calling, as shown in the first code block above (see the sketch after this list for the kind of call I mean).
  2. Passing workers=1; to my surprise the time taken roughly doubles when using workers.
  3. Running the images sequentially, which is the fastest so far.
  4. Vstacking 3 images into one, still with no luck.
  5. My images are all 512x384.
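
To make point 1 concrete, this is the kind of batched call I mean, with the images decoded up front instead of passed as file paths (a rough sketch; `img_paths` stands in for `self.imglist`, and `n_width`/`n_height` are my guess at the arguments recent EasyOCR versions accept for uniformly sized batches):

    import cv2
    import easyocr

    reader = easyocr.Reader(['en'], cudnn_benchmark=True)
    # Decode once, outside the OCR call; every image is 512x384 (point 5 above)
    images = [cv2.imread(p) for p in img_paths]
    results = reader.readtext_batched(images, n_width=384, n_height=512, batch_size=16)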

Please help me out if you have any suggestions for running easyocr inference with the lowest possible latency (I need to process as many images as possible within a second). I am also open to trying a different open-source OCR; my only constraint is that it should be very fast with good accuracy.

I am really stuck at this point. Please help.

  • [list comprehensions](https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions) could speed it up a lot – Anthony L Aug 21 '22 at 04:57
  • list comprehension while converting to grayscale and calling the ocr? – Sumeet Suman Aug 21 '22 at 05:37
  • Instead of creating the intermediate `k` and `j`, they could just be comprehended and consumed as needed. Also I think it's faster not to garbage collect manually, unless you need to, because Python will do it on its own more efficiently – Anthony L Aug 21 '22 at 05:43
  • Also, you are enumerating a list, and not even using the index – Anthony L Aug 21 '22 at 05:45

1 Answer


List comprehensions are normally a good way to gain performance. They can be implemented like this:

def EasyOcrTextSequence(self):
    reader = easyocr.Reader(['en'])
    # Lazily decode the images, but build a concrete list for the grayscale
    # step, since np.array needs a real sequence rather than a generator
    images = (cv2.imread(img) for img in self.imglist)
    images_grayscaled = [cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) for img in images]
    self.arr = np.array(images_grayscaled)
    return [reader.readtext(i, paragraph=True, detail=0, batch_size=16) for i in self.arr]
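
For example, assuming a small wrapper class that just holds the image paths (the class name and file names here are placeholders, not from the question):

    # Hypothetical wrapper, only to show how the method above is called;
    # the real class in the question is not shown.
    class Ocr:
        def __init__(self, imglist):
            self.imglist = imglist

    Ocr.EasyOcrTextSequence = EasyOcrTextSequence  # attach the function above as a method

    texts = Ocr(['page1.jpg', 'page2.jpg']).EasyOcrTextSequence()
    print(texts[0])
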
– Anthony L
  • I tried this. I do not see any noticeable effect on time. There is something going on with easyocr itself, like why "workers" is not helping; alternatively, could you help me with a multiprocessing implementation of the OCR? I am updating my question with a timing screenshot – Sumeet Suman Aug 21 '22 at 06:23
  • I'm surprised that computation time went up doing that. It's hard to say otherwise without seeing how you are sending tasks to your workers – Anthony L Aug 21 '22 at 06:39
  • I actually tried multiprocessing but the code gets stuck in the easyocr.readtext method. – Sumeet Suman Aug 21 '22 at 06:43