
I am trying to detect the magnetic stripe of a credit card held in front of a person's face. First, I tried to detect its borders with the Canny edge detector. Although there is a clearly visible edge, the edge detection fails to pick up the discontinuous border. Below is the code I ran to get the result:

import cv2
import matplotlib.pyplot as plt

img = cv2.imread(input_dir + str(f))

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.bilateralFilter(gray, 5, 10, 10)  # edge-preserving smoothing
edges = cv2.Canny(gray, 20, 60)

plt.subplot(121), plt.imshow(gray, cmap='gray')
plt.title('Original Image'), plt.xticks([]), plt.yticks([])
plt.subplot(122), plt.imshow(edges, cmap='gray')
plt.title('Edge Image'), plt.xticks([]), plt.yticks([])
plt.show()

(resulting edge image attached)

Source image: (image attached)

Desired outcome (region marked with red): (image attached)

I would appreciate any help.

Thanks,

Niko

  • why? just out of curiosity... – pangyuteng Jun 24 '19 at 01:46
  • @teng, I need to determine the absolute face dimensions and use the known object size as a reference. As I wrote in the comment below your answer, in parallel I am collecting images and testing a CNN with a pretrained backbone and a custom head. – Niko Gamulin Jun 24 '19 at 08:17

2 Answers


Update:

Performed double thresholding (a lower and an upper HSV bound) after converting the color space to HSV. See the results below:

import cv2
import matplotlib.pyplot as plt
import numpy as np

img = cv2.imread("img.jpg")

hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
lower_grey = np.array([0, 5, 50])     # Lower HSV bound for grey.
upper_grey = np.array([360, 50, 255]) # Upper HSV bound for grey (OpenCV hue only goes to 179, so all hues pass).
mask = cv2.inRange(hsv, lower_grey, upper_grey)
img_res = cv2.bitwise_and(img, img, mask = mask)
img_res = cv2.GaussianBlur(img_res,(7,7),0)

edges = cv2.Canny(img_res, 100, 200)

plt.subplot(121), plt.imshow(img_res, cmap='gray')
plt.title('Original Image'), plt.xticks([]), plt.yticks([])
plt.subplot(122), plt.imshow(edges, cmap='gray')
plt.title('Edge Image'), plt.xticks([]), plt.yticks([])
plt.show()

(result images attached)
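If you need the stripe as a region rather than just its edges, one possible follow-up (only a sketch, assuming the mask above roughly isolates the grey stripe) is to find contours on the mask and keep the widest, flattest one:

# Sketch only: pick the widest, flattest contour in the mask as the stripe candidate.
# Note: cv2.findContours returns (contours, hierarchy) in OpenCV 4.x;
# OpenCV 3.x returns (image, contours, hierarchy).
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

best, best_area = None, 0
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    area = cv2.contourArea(c)
    if w > 3 * h and area > best_area:  # the stripe is much wider than it is tall
        best, best_area = (x, y, w, h), area

if best is not None:
    x, y, w, h = best
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)  # mark the region in red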

Original answer:

First, you can transform the color space to HSV and then apply a Gaussian blur to the saturation channel. Here is the code I used:

import cv2
import matplotlib.pyplot as plt

img = cv2.imread("img.jpg")

gray = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
gray = cv2.GaussianBlur(gray[:,:,1],(7,7),0)  # blur the saturation channel
edges = cv2.Canny(gray, 20, 60)

plt.subplot(121), plt.imshow(gray, cmap='gray')
plt.title('Original Image'), plt.xticks([]), plt.yticks([])
plt.subplot(122), plt.imshow(edges, cmap='gray')
plt.title('Edge Image'), plt.xticks([]), plt.yticks([])
plt.show()

My solution is constrained to just this image, where the card is held up close and horizontally.

!wget https://i.stack.imgur.com/46VsT.jpg

Read in the image.

import matplotlib.pyplot as plt
import numpy as np
import imageio

# rgb to gray https://stackoverflow.com/a/51571053/868736
im = imageio.imread('46VsT.jpg')
gray = lambda rgb : np.dot(rgb[... , :3] , [0.299 , 0.587, 0.114]) 
gray = gray(im)  
image = np.array(gray)
plt.imshow(image,cmap='gray')

(grayscale image attached)

import numpy as np
import skimage
from skimage import feature
from skimage.transform import probabilistic_hough_line
import matplotlib.pyplot as plt
from matplotlib import cm

Find horizontal edges with some constraints.

edges = np.abs(skimage.filters.sobel_h(image))
edges = feature.canny(edges,1,100,200)
plt.imshow(edges,cmap='gray')

(horizontal edge image attached)

Find horizontal lines with more constraints.

# https://scikit-image.org/docs/dev/auto_examples/edges/plot_line_hough_transform.html
lines = probabilistic_hough_line(edges, threshold=1, line_length=200,line_gap=100)
plt.imshow(edges * 0,cmap='gray')
for line in lines:
    p0, p1 = line
    plt.plot((p0[0], p1[0]), (p0[1], p1[1]),color='red')

(detected lines attached)

Use the detected lines to obtain the region of interest.

# https://scikit-image.org/docs/dev/auto_examples/edges/plot_convex_hull.html
from skimage.morphology import convex_hull_image
canvas = edges*0
for line in lines:
    p0, p1 = line
    canvas[p0[1],p0[0]]=1
    canvas[p1[1],p1[0]]=1
chull = convex_hull_image(canvas)
plt.imshow(chull,cmap='gray')

(convex hull region attached)
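If an axis-aligned bounding box is enough, its coordinates can be read straight off the hull mask (a minimal follow-up sketch using the chull array from above):

# Sketch: axis-aligned bounding box of the convex hull region.
rows, cols = np.where(chull)
top, bottom = rows.min(), rows.max()
left, right = cols.min(), cols.max()
plt.imshow(image, cmap='gray')
plt.plot([left, right, right, left, left],
         [top, top, bottom, bottom, top], color='red')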

... but why? ;)

I doubt the above solution would actually work "in production"... If you have the resources, I would go for a modified YOLO model and spend them on building a good dataset for training (emphasis on a "GOOD" dataset, but you have to define what good means first...). See this video for some inspiration: https://www.youtube.com/watch?v=pnntrewH0xg
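I won't sketch a full YOLO here, but to illustrate the general shape of a learned detector, below is a minimal corner-regression sketch (this assumes PyTorch and a ResNet-18 backbone; it is not the OP's actual model, just an illustration):

import torch
import torch.nn as nn
import torchvision

# Sketch only: a pretrained ResNet backbone with a small custom head that
# regresses the 4 card corners (8 values: x, y per corner), normalized to [0, 1].
class CardCornerNet(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = torchvision.models.resnet18(pretrained=True)
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop the final fc layer
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(512, 128),
            nn.ReLU(),
            nn.Linear(128, 8),
            nn.Sigmoid(),  # corner coordinates as fractions of the image width/height
        )

    def forward(self, x):
        return self.head(self.features(x))

model = CardCornerNet()
out = model(torch.randn(1, 3, 224, 224))  # -> tensor of shape (1, 8)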

  • If you are still going to go for conventional image processing solutions, these two solutions for Coca-Cola can recognition may be worth considering (SIFT-style / signature detection, assuming there are similarities across all the cards with magnetic stripes you need to detect): https://stackoverflow.com/a/10169025/868736 https://stackoverflow.com/a/10199154/868736 – pangyuteng Jun 24 '19 at 02:42
  • Thanks for the answer. Actually, in parallel I have built a modified YOLO version with 4 outputs that indicate the corners. With the current amount of images (approx. 200) it doesn't achieve satisfactory results, but it seems that once there is enough data to train on, the model will be much more robust. I used a pretrained ResNet as a backbone and tried to train the custom head. – Niko Gamulin Jun 24 '19 at 08:15
  • Cool!! Thanks for sharing. – pangyuteng Jun 24 '19 at 14:15