This is my test photo:

[image: test photo of the card]

I am trying to find the edges of the card. However, as you can see, the edges are somewhat blurry.

To find the edges, I first enhance the contrast of the image so that, hopefully, the blurry edges become less blurry and easier to find:

[image: contrast-enhanced image]

Then I apply a Gaussian blur to smooth it a little (I tried removing the Gaussian blur, but then the edge detector found too many details in the background and on the card).

Then I ran Canny with "dynamic thresholds" and got the following result:

[image: Canny edge output]

As you can see, I barely found any edges of the card (except the left one, which was easy because of the dark background). Is there a robust method (I don't want to "overfit" on this one image) to find straight, blurry edges?

I found some suggestions here: "Blurry edge detection" and "How to find accurate corner positions of a distorted rectangle from blurry image in python?", but none resulted in satisfying edges.

The full code:

import cv2
import numpy as np


def auto_canny(image, sigma=0.5):
    # Canny with thresholds derived from the image median ("dynamic thresholds")
    v = np.median(image)
    lower = int(max(0, (1.0 - sigma) * v))
    upper = int(min(255, (1.0 + sigma) * v))
    return cv2.Canny(image, lower, upper)

def add_contrast(img, contrast_level=8):
    # CLAHE on the L channel in LAB color space to boost local contrast
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)

    l, a, b = cv2.split(lab)

    clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(contrast_level, contrast_level))
    cl = clahe.apply(l)

    limg = cv2.merge((cl, a, b))

    final = cv2.cvtColor(limg, cv2.COLOR_LAB2BGR)

    return final

# ------------------------------------------ #
# FIND EDGES
# ------------------------------------------ #
img = cv2.imread('card.png')  # image path is illustrative; load your test photo here
img = add_contrast(img=img, contrast_level=8)

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cv2.imshow("gray", gray)

kernel_size = 5
blur_gray = cv2.GaussianBlur(gray, (kernel_size, kernel_size), 0)

edges = auto_canny(image=blur_gray) 

# Show images for testing
cv2.imshow('edges', edges)
cv2.waitKey(0)
cv2.destroyAllWindows()
Alex Goft
  • The bottom part of the ID will be hard to find in one step. You can either loosen the Canny thresholds and post-process the returned edges, or write your own way of detecting edges. In some ways, detecting the top and left edges should be easier. What you really need is a top-left and a bottom-left corner to find tilt and scale. – Croolman Mar 03 '20 at 10:52
  • Do the test images have the info redacted out with the red color, or did you add that manually before posting the image? – stateMachine Mar 04 '20 at 00:37
  • @eldesgraciado I added the red color manually before posting the image (though I did run the algorithm on this image). – Alex Goft Mar 04 '20 at 08:08

2 Answers

5

This is also not a complete solution, but if the red parts are problematic, you can first inpaint those regions using the cv2.inpaint() function. Then you can apply the rest of your approach to find the card edges.

# create an inpainting mask with "red-enough" pixels
mask = cv2.inRange(img_src_rgb, np.array([200,0,0]), np.array([255,50,50]))
# enlarge the mask to cover the borders
kernel = np.ones((3,3),np.uint8)
mask = cv2.dilate(mask,kernel,iterations = 1)
# inpaint the red parts using Navier-Stokes based approach
img_dst = cv2.inpaint(img_src, mask,50,cv2.INPAINT_NS)
cv2.imshow("no_red", img_dst)

The inpainted result is below.

[image: card with red regions inpainted]

EDIT: Now that we know what you are asking, below is a complete solution.

After the inpainting, you can apply Hough Transform to find the strong straight lines in the image.

gray = cv2.cvtColor(img_dst, cv2.COLOR_RGB2GRAY)
edges = auto_canny(gray) # your auto_canny function, WITHOUT blur
lines = cv2.HoughLines(edges, 1, np.pi/90, 50)  # rho step 1 px, theta step 2 deg, accumulator threshold 50
for line in lines:
    # convert (rho, theta) to two distant points on the line and draw it
    rho, theta = line[0]
    a = np.cos(theta)
    b = np.sin(theta)
    x0 = a*rho
    y0 = b*rho
    x1 = int(x0 + 10000*(-b))
    y1 = int(y0 + 10000*(a))
    x2 = int(x0 - 10000*(-b))
    y2 = int(y0 - 10000*(a))
    cv2.line(img_dst,(x1,y1),(x2,y2),(0,255,0),1)

cv2.imwrite('linesDetected.jpg', img_dst)

Again, the resulting lines are below.

[image: detected lines drawn over the inpainted card]
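
If you also need the card corners (as suggested in the comments on the question), the (rho, theta) pairs returned by cv2.HoughLines can be intersected directly. A minimal sketch, not part of the original answer; the helper line_intersection is hypothetical:

def line_intersection(line1, line2):
    # each line is (rho, theta) in Hesse normal form: x*cos(theta) + y*sin(theta) = rho
    rho1, theta1 = line1
    rho2, theta2 = line2
    A = np.array([[np.cos(theta1), np.sin(theta1)],
                  [np.cos(theta2), np.sin(theta2)]])
    b = np.array([rho1, rho2])
    x, y = np.linalg.solve(A, b)  # raises LinAlgError for (near-)parallel lines
    return int(round(x)), int(round(y))

# for example, intersect the first two detected lines:
# corner = line_intersection(lines[0][0], lines[1][0])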

ilke444
  • That's a great idea! However, that was not my question. I was hoping to find a way to detect blurry edges. – Alex Goft Mar 04 '20 at 08:16
  • In your example, the blurry edges are dominated by the red area borders, which is why I suggested this solution as a pre-processing step. You should correct your question by mentioning that the red parts are not in the original images, and generate an example output based on the original image, so that we can help. – ilke444 Mar 04 '20 at 08:23
  • @AlexGoft, see the complete answer above. – ilke444 Mar 04 '20 at 09:27
  • Thank you for your great answer, and sorry for the inconvenience. Could you please elaborate on why you use Canny without blur? – Alex Goft Mar 04 '20 at 09:29
  • You actually need edges and corners, and Gaussian blur smooths them, so you lose the card edges too. You may try a bilateral filter if you want some edge-aware smoothing (see the sketch after this thread). Also, increasing the contrast introduces new edges that contribute more to the noise than to the edges you seek. For this case you should exploit the assumption in your data: the card edges are straight lines. – ilke444 Mar 04 '20 at 09:33
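
A minimal sketch of the edge-aware smoothing mentioned in the last comment, using cv2.bilateralFilter in place of the Gaussian blur (the parameter values 9, 75, 75 are common illustrative defaults, not taken from this thread):

# bilateral filter smooths flat regions while preserving strong edges
smoothed = cv2.bilateralFilter(gray, 9, 75, 75)  # d=9, sigmaColor=75, sigmaSpace=75
edges = auto_canny(smoothed)                     # auto_canny from the question, no Gaussian blur
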
3

You can improve the solution by filling parts of the background using cv2.floodFill.

Enhancing the contrast before finding edges is a nice idea, but it looks like it creates some artifacts that make finding the edges more difficult.

Here is a code sample:

import numpy as np
import cv2


def auto_canny(image, sigma=0.5):
    v = np.median(image)
    lower = int(max(0, (1.0 - sigma) * v))
    upper = int(min(255, (1.0 + sigma) * v))
    return cv2.Canny(image, lower, upper)


img = cv2.imread('card.png')

h, w = img.shape[0], img.shape[1]

# Seed points for floodFill (a few points along each border improve robustness)
seedPoints = ((5, 5), (w-5, 5), (5, h-5), (w-5, h-5), 
              (w//2, 5), (w//2, h-5), (5, h//2), (w-5, h//2), 
              (w//4, 5), (w//4, h-5), (5, h//4),  (w-5, h//4),
              (w*3//4, 5), (w*3//4, h-5), (5, h*3//4),  (w-5, h*3//4))

# Fill parts of the background with black color
for seed in seedPoints:
    cv2.floodFill(img, None, seedPoint=seed, newVal=(0, 0, 0), loDiff=(2, 2, 2), upDiff=(2, 2, 2))

# ------------------------------------------ #
# FIND EDGES
# ------------------------------------------ #
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cv2.imshow("gray", gray)

kernel_size = 5
blur_gray = cv2.GaussianBlur(gray, (kernel_size, kernel_size), 0)

# Detect edges using Canny with automatic thresholds
edges = auto_canny(image=blur_gray)

# Show images for testing
cv2.imshow('edges', edges)
cv2.waitKey(0)
cv2.destroyAllWindows()

Result:

[image: edges after flood-filling the background]

I know it's not a complete solution, but I hope it helps...

Rotem
  • Points are just spread evenly along the perimeter of the image (I should have used a loop; see the sketch below). – Rotem Mar 04 '20 at 09:10
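
For reference, a minimal sketch of that loop (the 5-pixel margin and quarter spacing mirror the hard-coded points above; duplicate corner seeds are harmless to floodFill):

# seeds at 0, 1/4, 1/2, 3/4 and 1 of each border, clamped 5 pixels inside the image
seedPoints = []
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    x = min(max(int(t * (w - 1)), 5), w - 5)
    y = min(max(int(t * (h - 1)), 5), h - 5)
    seedPoints += [(x, 5), (x, h - 5), (5, y), (w - 5, y)]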