
I am currently writing an OpenCV program to recognize Braille characters and translate them into Spanish. So far, I can detect the Braille dots and retrieve their centroids successfully.

In order to divide the dots into characters, I am currently drawing lines along the x and y axes of each Braille dot, creating a sort of visual grid:

import cv2 as cv
import numpy as np
import matplotlib.pyplot as plt

# Label the dot blobs; label 0 is the background, so it is skipped below.
num_labels, labels, centroid_stats, centroids = cv.connectedComponentsWithStats(canny_edges, connectivity=8)
sorted_centroids = sorted(centroids[1:], key=lambda c: (c[1], c[0]))  # sort by row, then column

copied_image = img_show.copy()

white_canvas = np.ones_like(copied_image)
white_canvas[:] = (255, 255, 255)

lines_canvas = white_canvas.copy()
height, width = copied_image.shape[:2]  # numpy shapes are (rows, cols, channels)
for centroid in sorted_centroids:
    x, y = int(round(centroid[0])), int(round(centroid[1]))
    cv.circle(copied_image, (x, y), radius=1, color=(255, 0, 0), thickness=-1)

    # Draw full-image vertical and horizontal lines through each centroid.
    cv.line(lines_canvas, (x, 0), (x, height - 1), color=(0, 0, 0), thickness=2)
    cv.line(lines_canvas, (0, y), (width - 1, y), color=(0, 0, 0), thickness=2)

plt.imshow(copied_image)
plt.title('Braille with Marked Centroids')
plt.axis('off')
plt.show()

plt.imshow(lines_canvas)
plt.title('Braille with Grid Lines')
plt.axis('off')
plt.show()

[Images: recognized centroids; drawn grid lines]

Some dots are not perfectly aligned, so I apply thresholding to merge lines that fall very close together:

# Binarize the drawn grid: grayscale, smooth, threshold, then invert so the
# lines end up white on black (the foreground format HoughLinesP expects).
lines_grayscale = cv.cvtColor(lines_canvas, cv.COLOR_BGR2GRAY)
lines_bilateral_filter = cv.bilateralFilter(lines_grayscale, 13, 75, 75)
lines_adaptive_thres = cv.adaptiveThreshold(lines_bilateral_filter, 255, cv.ADAPTIVE_THRESH_GAUSSIAN_C, cv.THRESH_BINARY, 11, 2)
_, lines_binary_threshold = cv.threshold(lines_adaptive_thres, 0, 255, cv.THRESH_BINARY + cv.THRESH_OTSU)
lines_bitwise_threshold = cv.bitwise_not(lines_binary_threshold)

plt.imshow(lines_bitwise_threshold, cmap='gray')
plt.title('Braille with Thresholded Grid Lines')
plt.axis('off')
plt.show()

[Image: thresholded lines]
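As a side note, since lines_canvas is synthetic (pure black lines drawn on a pure white canvas), I suspect a single fixed inverse threshold would produce the same binary mask without the filtering steps; a minimal sketch:

# The canvas is synthetic (black lines on pure white), so a fixed inverse
# threshold should be enough to get white lines on a black background.
lines_grayscale = cv.cvtColor(lines_canvas, cv.COLOR_BGR2GRAY)
_, lines_bitwise_threshold = cv.threshold(lines_grayscale, 127, 255, cv.THRESH_BINARY_INV)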

I am trying to use the probabilistic Hough line transform to detect the lines and retrieve their intersections, but I am running into a problem:

result = img_show.copy()
# rho = 1 px, theta = 1 degree, threshold = 50, minLineLength = 50, maxLineGap = 10
lines = cv.HoughLinesP(lines_bitwise_threshold, 1, np.pi / 180, 50, None, 50, 10)
if lines is not None:
    for line in lines:
        x1, y1, x2, y2 = line[0]
        cv.line(result, (x1, y1), (x2, y2), (0, 0, 255), 1, cv.LINE_AA)
plt.imshow(result)
plt.title('Braille with Hough Lines')
plt.axis('off')
plt.show()

[Image: probabilistic Hough lines]

As you can see, it only detects horizontal lines. If I reduce the threshold and minLineLength, I get the vertical lines, but the horizontal lines get drawn on top of each other. This is how it looks with minLineLength and threshold reduced to 1:

[Image: vertical lines]

A higher minLineLength means fewer vertical lines are detected, while a higher threshold means the vertical lines get drawn as a bunch of short horizontal segments:

[Image: minLineLength at 50]

[Image: threshold at 50]
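For comparison, one workaround I have seen for this orientation problem (an alternative technique, not part of my current pipeline) is to extract the horizontal and vertical lines separately with morphological opening, so the two orientations never compete for the same Hough parameters:

# Extract each orientation separately with a long, thin structuring element;
# the kernel lengths are guesses and would need tuning to the line lengths.
horiz_kernel = cv.getStructuringElement(cv.MORPH_RECT, (40, 1))
vert_kernel = cv.getStructuringElement(cv.MORPH_RECT, (1, 40))
horizontal = cv.morphologyEx(lines_bitwise_threshold, cv.MORPH_OPEN, horiz_kernel)
vertical = cv.morphologyEx(lines_bitwise_threshold, cv.MORPH_OPEN, vert_kernel)

# The grid intersections are simply where both orientations overlap.
intersections = cv.bitwise_and(horizontal, vertical)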

I don't know whether I should change the parameters in the thresholding process or use a completely different approach altogether to segment the Braille characters.
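One such alternative that occurs to me: since the grid lines are drawn from the known centroids in the first place, the column and row positions could be computed directly from the centroid coordinates, skipping the redrawing and Hough steps entirely. A minimal sketch (the cluster_1d helper and its 3-pixel tolerance are my own assumptions):

def cluster_1d(values, tol=3):
    """Merge sorted 1-D coordinates lying within `tol` pixels of their
    predecessor and return each cluster's mean (one grid line per cluster)."""
    values = sorted(values)
    clusters, current = [], [values[0]]
    for v in values[1:]:
        if v - current[-1] <= tol:
            current.append(v)
        else:
            clusters.append(sum(current) / len(current))
            current = [v]
    clusters.append(sum(current) / len(current))
    return clusters

# Grid columns and rows straight from the centroids; the intersections are
# just the Cartesian product of the two lists.
col_xs = cluster_1d([c[0] for c in sorted_centroids])
row_ys = cluster_1d([c[1] for c in sorted_centroids])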

Edit:

I have taken another approach: aligning and redrawing the contours by their vertical (column) alignment. This way, I believe it is easier to calculate the segmentation dimensions from the distances between glyphs:

from collections import defaultdict

colors = defaultdict(lambda: (np.random.randint(0, 220), np.random.randint(0, 220), np.random.randint(0, 220)))
column_coors = defaultdict(lambda: set())
dimensions = set()
for contour in filtered_contours:
    moments = cv.moments(contour)
    contour_width, contour_height = cv.boundingRect(contour)[2:]

    x, y = int(moments["m10"] / moments["m00"]), int(moments["m01"] / moments["m00"])

    # If a column already exists one pixel to the left or right, snap this
    # dot onto it so slightly misaligned dots share a common x value.
    already_added = [column_coors.get(x - 1), column_coors.get(x + 1)]
    if already_added[0] or already_added[1]:
        shift_value = -1 if already_added[0] else 1
        x = x + shift_value
        contour = np.array([[[point[0][0] + shift_value, point[0][1]]] for point in contour])

    if (x, y) not in column_coors[x]:
        column_coors[x].add((x, y))
        dimensions.add((contour_width, contour_height))

        # Reuse the color of this column (or of an adjacent one) so every
        # dot in the same column is drawn in the same color.
        color = colors[x] if x in colors else colors[x - 1] if x - 1 in colors else colors[x + 1]
        cv.drawContours(rotated_image, [contour], -1, color, 1)

# Sort y values (rows) from top to bottom within the same x value (column).
for x in column_coors.keys():
    column_coors[x] = sorted(column_coors[x], key=lambda c: c[1])
> Column coordinates: defaultdict(<function <lambda> at 0x7fdead080a40>, {17: [(17, 15), (17, 39)], 29: [(29, 15)], 100: [(100, 17)], 112: [(112, 29)], 208: [(208, 17), (208, 28)], 220: [(220, 16), (220, 29)], 289: [(289, 16), (289, 40)], 301: [(301, 40)], 372: [(372, 27), (372, 39)], 385: [(385, 15)], 455: [(455, 27), (455, 39)], 467: [(467, 15), (467, 27)], 537: [(537, 17)], 645: [(645, 15), (645, 27), (645, 39)], 727: [(727, 17)], 833: [(833, 17)], 845: [(845, 17)], 917: [(917, 17)], 998: [(998, 15), (998, 27)], 1010: [(1010, 15), (1010, 27), (1010, 39)], 1081: [(1081, 17)]})
>
> Glyph bounding rect dimensions: {(4, 4), (5, 5), (7, 7), (6, 5), (6, 7), (5, 6), (6, 6)}

[Image: contours aligned by column]
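Looking at the column coordinates above, the dot columns come in pairs roughly 12 px apart, with much larger gaps between cells. A minimal sketch of the segmentation I am considering, grouping the sorted column x-values into cells wherever the gap between consecutive columns jumps (the cutoff heuristic is my own assumption):

# Group the aligned column x-values into Braille cells: small gaps are the
# two dot columns of one cell, large gaps separate neighboring cells.
# Taking the cutoff as the midpoint between the smallest and largest gap
# is a heuristic of mine; a histogram of the gaps would be more robust.
xs = sorted(column_coors.keys())
gaps = [b - a for a, b in zip(xs, xs[1:])]
cutoff = (min(gaps) + max(gaps)) / 2

cells, current = [], [xs[0]]
for x, gap in zip(xs[1:], gaps):
    if gap > cutoff:
        cells.append(current)
        current = []
    current.append(x)
cells.append(current)
# Each entry of `cells` now holds the dot-column x-values of one glyph
# (one value for single-column glyphs, two otherwise).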

Edit 2:

I was asked which research papers I am basing my attempts to segment the Braille characters on:

  1. Hough Lines (Section II A)
  2. Fixed distance segmentation (Section 2.4)
  • hough will only cause you more trouble. -- before you do anything, do literature research. look at how others read braille. – Christoph Rackwitz Jun 12 '23 at 15:55
  • I checked through some papers and other stackoverflow questions before posting. This method I am making is inspired by an Electronics Engineering paper. The most common methods I see is drawing matrices like what I'm trying to do, or pattern matching. – Alex Serafini Jun 12 '23 at 17:36
  • detail your research in the question. whenever possible, give links or references. – Christoph Rackwitz Jun 13 '23 at 11:09
  • I see components to a solution. -- one component is to identify dots, wherever they are. another is to group points into lines (by colinearity), or into glyphs (by proximity). -- another component is to get the orientation right. when you can't be sure stuff is upright, that ruins a bunch of simplifying assumptions. so, first step should be to either assume the picture is upright, or to make it so. that could be accomplished after finding colinear points, and using that info to rotate the picture, or just the points. – Christoph Rackwitz Jun 13 '23 at 11:13
  • another component is to identify glyph sizes. assuming there is just one glyph size, or maybe two/a few, makes grouping into glyphs easier. a histogram of point distances (nearest 1-2 neighbors) would be useful for this. – Christoph Rackwitz Jun 13 '23 at 11:15
  • As of now I rotate the image and align it using PCA, and align dots vertically when their x values are different by 1 or 2 pixels. This way I have the coordinates of the centers aligned to the common x values. Now I have to look for a way to segment the dots given these new circumstances. – Alex Serafini Jun 20 '23 at 13:52
  • I also looked into the bounding rect of the dots, and for the most part they are exactly the same with a difference of 1 pixel in both width and height. The problem is the spacing is not the same for all Braille writings, which means I cannot use a fixed formula. I'm thinking of taking into account the distances between centers and the x values to determine the size of the segment. – Alex Serafini Jun 20 '23 at 13:54
  • I added the papers I mentioned in the previous comment. – Alex Serafini Jun 20 '23 at 14:57
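Following the distance-histogram suggestion from the comments above, here is a quick sketch I am considering (variable names are mine); the peaks should expose the intra-cell dot pitch and the inter-cell spacing, which could replace a fixed-distance formula:

# Collect every dot center, then measure each center's distance to its two
# nearest neighbors; on a regular Braille grid the histogram should peak
# at the dot pitch within a cell and at the spacing between cells.
points = np.array([pt for coords in column_coors.values() for pt in coords], dtype=float)
dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
np.fill_diagonal(dists, np.inf)          # ignore self-distances
nearest_two = np.sort(dists, axis=1)[:, :2]
hist, bin_edges = np.histogram(nearest_two.ravel(), bins=30)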
