
So I have trained an object-detection neural network (YOLOv3) to detect bounding boxes around the license plates in car pictures shot at a variety of tilted and straight angles, and it does so pretty reliably. However, I now want to extract the license-plate parallelogram from the bounding box that surrounds it using image processing, without having to train another neural network to do so. Sample images:

sample images

I have tried performing edge and contour detection with OpenCV's built-in functions, as in the following minimal code, but only managed to succeed on a small subset of images this way:

import cv2
import matplotlib.pyplot as plt
import numpy as np

def auto_canny(image, sigma=0.25):
    # compute the median of the single-channel pixel intensities
    v = np.median(image)

    # apply automatic Canny edge detection using the computed median
    lower = int(max(0, (1.0 - sigma) * v))
    upper = int(min(255, (1.0 + sigma) * v))
    edged = cv2.Canny(image, lower, upper)

    # return the edged image
    return edged


# Load the image (the YOLO bounding-box crop)
input_file = "path/to/plate_crop.jpg"
orig_img = cv2.imread(input_file)

img = orig_img.copy()

# Split out each channel
blue, green, red = cv2.split(img)

# Run Canny edge detection on each channel
blue_edges = auto_canny(blue)
green_edges = auto_canny(green)
red_edges = auto_canny(red)

# Join the per-channel edges back into one edge image
edges = blue_edges | green_edges | red_edges

# Note: OpenCV 4.x returns (contours, hierarchy); 3.x returns (image, contours, hierarchy)
contours, hierarchy = cv2.findContours(edges.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)

# keep the 20 largest contours, hull them, and approximate each hull as a polygon
cnts = sorted(contours, key=cv2.contourArea, reverse=True)[:20]
hulls = [cv2.convexHull(cnt) for cnt in cnts]
perims = [cv2.arcLength(hull, True) for hull in hulls]
approxes = [cv2.approxPolyDP(hulls[i], 0.02 * perims[i], True) for i in range(len(hulls))]

approx_cnts = sorted(approxes, key=cv2.contourArea, reverse=True)
lengths = [len(cnt) for cnt in approx_cnts]

# pick the largest quadrilateral; fall back to the largest polygon if none was found
approx = approx_cnts[lengths.index(4)] if 4 in lengths else approx_cnts[0]

# check the ratio of the detected plate area to the bounding-box area
if cv2.contourArea(approx) / (img.shape[0] * img.shape[1]) > .2:
    cv2.drawContours(img, [approx], -1, (0, 255, 0), 1)

plt.imshow(img); plt.show()

Here are some example results:

(The top-row images are the results of the edge detection stage.)

Successful cases:

successful-examples

Unsuccessful cases:

unsuccessful-examples

Partially successful cases:

kinda-successful-examples

And the cases where no quadrilateral/parallelogram was found, so the polygon with the largest area is drawn instead:

non-quadrilateral-examples

All of these results were produced with the exact same set of parameters (thresholds, etc.).

I have also tried applying a Hough transform using cv2.HoughLines, but I don't know why vertically tilted lines are always missed, no matter how low I set the accumulator threshold. Also, when I lower the threshold, I get these diagonal lines out of nowhere:

hough-transform-examples

And the code I used for drawing the Hough lines:

lines = cv2.HoughLines(edges, 1, np.pi / 180, 20)
if lines is not None:
    for line in lines:
        rho, theta = line[0]
        a = np.cos(theta)
        b = np.sin(theta)
        x0 = a * rho
        y0 = b * rho
        # extend each line well past the image borders in both directions
        x1 = int(x0 + 1000 * (-b))
        y1 = int(y0 + 1000 * (a))
        x2 = int(x0 - 1000 * (-b))
        y2 = int(y0 - 1000 * (a))

        cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)
plt.imshow(img); plt.show()

Is it really this hard to achieve a high success rate using only image-processing techniques? Of course, machine learning could solve this problem like a piece of cake, but I think it would be overkill, and I don't have the annotated data for it anyway.

Moalana
  • If you use a limited set of cameras and have physical access to them, undistorting the images using a calibration might help (so that straight lines in the world, such as the plate edges, appear straight in the undistorted pictures). – Gabriel Devillers Apr 10 '19 at 08:03
  • @GabrielDevillers Unfortunately the plate images are user-uploaded images; no access to the cameras whatsoever. – Moalana Apr 10 '19 at 08:07
  • You clearly need to apply some pre-processing before you apply canny and/or contour detection. Can you upload few original images so I can give it a try? – Rick M. Apr 10 '19 at 08:23
  • @RickM. Exactly, but what kind of preprocessing is the question. I uploaded a few examples here: https://imgur.com/a/IsSYttk Also feel free to upsample the images, even though with different resolutions I somehow get different results sometimes. – Moalana Apr 10 '19 at 08:36
  • @Moalana I will give it a try and get back to you asap. – Rick M. Apr 10 '19 at 08:42
  • @Moalana So I tried with some pre-processing using several techniques and the one that really stands out was [CLAHE](https://stackoverflow.com/questions/25008458/how-to-apply-clahe-on-rgb-color-images). This seems to work nicely with the images with reflections and also the blurred ones with the only issue being that the parameters had to be adjusted. The main problem arises due to the extremely low quality of some of your images. I also tried [unsharp mask](https://stackoverflow.com/questions/32454613/python-unsharp-mask) but it doesn't seem to help in these images. – Rick M. Apr 16 '19 at 05:12
  • Since you are already using YOLO, why don't you use Mask R-CNN instead. It'll give you both bounding box around number plate and mask on number plate. You can get the parallelogram using the mask. For training, you can create masks on a few number plates using a tool like [this](http://www.robots.ox.ac.uk/~vgg/software/via/). You can take a look at TensorFlow object detection Mask R-CNN API as well. – Safwan May 15 '19 at 06:12
  • @Safwan Because the inference time is very important for the task at hand (a near-realtime requirement) and Mask R-CNN is too sluggish for that, especially at the resolutions at which I pass the upsampled input images to YOLOv3. Besides, I think for a well-defined parallelogram-shaped license plate, Mask R-CNN would be overkill anyway. Add to all of that the small-object detection problem with Mask R-CNN too (license-plate dimensions in my original vehicle images are in the small to very-small range). – Moalana May 15 '19 at 10:45

2 Answers


You could use a color filter to detect the desired area.
The boundaries of license plates are often marked in white. You could detect the white-ish pixels in the image and plot lines between the outermost positions.

The algorithm would look something like:

  1. Specify the RGB values you would like to detect
  2. Detect the positions (x,y) in which these RGB values occur
  3. Identify the top-left, bottom-left, top-right and bottom-right positions
  4. Plot lines between these positions

This color-detection example from PyImagesearch might help you code it up.

Of course, detecting white plates wouldn't work on white cars.

To account for white cars, you could check whether any white was detected on the borders of the bounding-box image you provided. If so, try drawing lines between the outermost blue, red, or black pixels (since the license-plate letters have these colors).

leermeester

I would apply a white-pass threshold filter followed by blob detection (ignoring all blobs that touch the image edges) to get the bulk position of the license plate. In all the photos you posted, the license plate does not touch the edge of the image and has at least one pixel of outline. To get the corners, I would do the following:

Take any point in the blob and find the point in the blob farthest from it. The result should be a corner point. Then take that corner point and find the point farthest from it. This should give you two opposite corners. Then find the blob point that has the largest combined distance from both of those points. Repeat one final time for the point farthest from all three. Now you have all four corners. To order them, compute the midpoint, create vectors to each corner, and do a clockwise winding.
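The farthest-point search described above can be sketched like this (a minimal illustration; it assumes the blob is already available as an (N, 2) array of pixel coordinates, and the function name is mine):

```python
import numpy as np

def blob_corners(points):
    """Find 4 corners of a roughly quadrilateral blob by repeated
    farthest-point search, then order them by winding around the
    midpoint. `points` is an (N, 2) array of (x, y) blob pixels."""
    pts = np.asarray(points, dtype=np.float64)

    def farthest_from(anchors):
        # point maximizing the summed distance to all anchor points
        d = sum(np.linalg.norm(pts - a, axis=1) for a in anchors)
        return pts[d.argmax()]

    c1 = farthest_from([pts[0]])          # any point -> a first corner
    c2 = farthest_from([c1])              # the opposite corner
    c3 = farthest_from([c1, c2])          # third corner
    c4 = farthest_from([c1, c2, c3])      # fourth corner
    corners = np.array([c1, c2, c3, c4])

    # order by angle around the midpoint (clockwise on screen,
    # since image coordinates have y pointing down)
    mid = corners.mean(axis=0)
    ang = np.arctan2(corners[:, 1] - mid[1], corners[:, 0] - mid[0])
    return corners[np.argsort(ang)]
```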

Let me know if you still need this solution or a better description, and I can write and test code on your imgur set. I have high confidence in this strategy based on your sample images.

Sneaky Polar Bear