
I am trying to extract the tiles (letters) placed on a Scrabble board. The goal is to identify/read all the words present on the board.

An example image - Scrabble board

Ideally, I would like to find the four corners of the Scrabble board and apply a perspective transform for further processing.

After Perspective transform - Scrabble board after transform

The algorithm that I am using is as follows:

  1. Apply adaptive thresholding to the grayscale image of the Scrabble board. Binary Image

  2. Dilate/close the image, find the largest contour in the image, then find its convex hull and completely fill the area enclosed by the hull. Mask Image

  3. Find the boundary points (contour) of the resultant image, apply contour approximation to get the corner points, and then apply the perspective transform.

Corner Points found - Corner points

This approach works for images like these. But, as you can see, many square boards have a base that is curved at the top and bottom; sometimes the base is a big circular board. With these images my approach fails. Example images and outputs:

Board with Circular base:

Board

Points found using above approach:

Corners

I can post more such problematic images, but this one should give you an idea of the problem I am dealing with. My question is:

How do I find the rectangular board when a circular board is also present in the image?

Some points I would like to state:

  1. I tried using Hough lines to detect the lines in the image, find the largest vertical line(s), and then find their intersections to get the corner points. Unfortunately, because of the tiles, the detected lines are distorted/disconnected, so my attempts failed.

  2. I have also tried applying contour approximation to all the contours found in the image (assuming that the large rectangle, too, would show up as a contour), but that approach failed as well.

  3. I have implemented the solution in OpenCV-Python. Since the approach is what matters here, and the question was becoming a tad too long, I didn't post the relevant code.

I am willing to share more such problematic images as well, if it is required. Thank you!

EDIT 1: @Silencer's answer has been mighty helpful for identifying the letters in the image, but I also want to accurately find the placement of the words. Hence, I feel identifying the rows and columns is necessary, and I can do that only after a perspective transform is applied to the board.

Ganesh Tata
  • What do you think about removing non-linear contours? – Dmitrii Z. Dec 25 '17 at 08:51
  • @DmitriiZ. What techniques can I use to do that? I was hoping that there would be some way to remove the curves. Tried using erosion followed by dilation but it just doesn't generalize. There must be some other way to do the same. – Ganesh Tata Dec 25 '17 at 08:53
  • First thing which comes to mind is to do whatever you're doing now, try to approximate it (cv2.approxPolyDP) and see if it is more or less rectangular. If not, delete the contour. It should also be doable to detect whether a line is curved (https://www.mathworks.com/matlabcentral/answers/164349-how-to-calculate-the-curvature-of-a-boundaries-in-binary-images ) – Dmitrii Z. Dec 25 '17 at 08:55
  • I could try and detect the curves, but is there no other way I can take advantage of the rectangular shape of the board to detect the corners? I just want to approximate the corner points of the rectangle. – Ganesh Tata Dec 25 '17 at 09:43

2 Answers


I wrote an answer on MSER text detection:

Trying to Plot OpenCV's MSER regions using matplotlib

The code generates the following results on your images.

(MSER detection results on the two example boards)

You can give it a try.

Kinght 金
  • This does look promising. Thank you! I will give this a shot. – Ganesh Tata Dec 25 '17 at 10:21
  • I just have a small doubt : Is there any specific reason why MSER misses out on the letter "I"? – Ganesh Tata Dec 25 '17 at 11:06
  • 1
    Possible reasons: (1) filtered by `mser.setMinArea/mser.setMaxArea` ; (2) filtered by width/height of `cv2.boundingRect`. You should try do modify the parameters to adjust your image. – Kinght 金 Dec 25 '17 at 11:12

I think @silencer has already given a quite promising solution.

But to perform the perspective transform, you mentioned that you already tried Hough lines to find the largest rectangle, and that it fails because of the tiles present.

If you have a large image dataset (maybe more than 1000 images), you could also give a shot to a deep-learning-based approach, where you train a model that takes an image as input and outputs the coordinates of the rectangle's corner points.
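That idea could be sketched as a small corner-regression network. This is a hypothetical PyTorch sketch only: the architecture, input size, and the assumption that corner coordinates are normalized to [0, 1] are all my own placeholders, not a tested design.

```python
import torch
import torch.nn as nn

# Hypothetical regression model: image in, 8 numbers out
# (x, y for each of the 4 board corners). Architecture is arbitrary.
class CornerNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(32 * 4 * 4, 8)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# One training step against labeled corner coordinates would look like:
model = CornerNet()
images = torch.randn(2, 3, 128, 128)   # stand-in batch of board photos
targets = torch.rand(2, 8)             # normalized corner coordinates
loss = nn.functional.mse_loss(model(images), targets)
loss.backward()
```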

flamelite