
Using Google Vision, from here, I was able to create a client and an image using vision.Client() and client.image(content=data) respectively, and then send my image using image.detect_text(), attempting to read the digits within the image. However, Google Vision has been inaccurate, and I heard, from this question, that setting the language to another (non-Latin) language would help with this.

But that is where I am stuck: I'm not sure where to set the languageHints. Yes, I have seen this link to the documentation of the AnnotateImageRequest, but I am still confused as to where this comes in.
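For reference, this is roughly what my current code looks like, using the older google-cloud-vision client that exposes vision.Client() (digits.png is just a stand-in for my actual file):

    from google.cloud import vision

    # Legacy-style client and image (older google-cloud-vision releases)
    client = vision.Client()
    with open('digits.png', 'rb') as f:  # placeholder file name
        data = f.read()
    image = client.image(content=data)

    # detect_text() returns annotations, each with a .description field
    texts = image.detect_text()
    for text in texts:
        print(text.description)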


2 Answers


I am not an expert in this, but the following seems to work for me:

First, you create an image_context object, as follows:

    from google.cloud.vision import types  # older client versions; newer releases expose vision.ImageContext directly

    image_context = types.ImageContext(language_hints=["en"])

Then you call text_detection with the image_context you created as a parameter, as follows:

    response = client.text_detection(image=image, image_context=image_context)
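Putting it together, something like the following works for me, assuming a recent google-cloud-vision release (where ImageContext is exposed directly on the vision module) and digits.png as a placeholder file:

    from google.cloud import vision

    client = vision.ImageAnnotatorClient()

    # Load the image bytes (digits.png is a placeholder path)
    with open('digits.png', 'rb') as f:
        content = f.read()
    image = vision.Image(content=content)

    # language_hints nudges the OCR toward the expected language(s)
    image_context = vision.ImageContext(language_hints=['en'])

    response = client.text_detection(image=image, image_context=image_context)
    for annotation in response.text_annotations:
        print(annotation.description)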
– Jimmy

    image_context = vision.ImageContext(language_hints=["en"])
    response = client.text_detection(image=image, image_context=image_context)

  • As it's currently written, your answer might be correct, but it could be improved in two ways. First and foremost, it helps to format code as code; this makes it more readable. Second, some supporting text might help to let the OP know that you understood the question thoroughly. Hope this helps! – Tim J Jan 04 '23 at 12:08