
I'm trying to remove the background from images I'm using to train a neural network. I've had little luck with the method described here (How do I remove the background from this kind of image?), but I have been able to use the Canny edge detector to get a reasonably good boundary of the object in my image. Here's the code I'm running:

import cv2 as cv
import matplotlib.pyplot as plt

# path to the input frame
filename = '/Users/colew/Desktop/Good_Cups_Frames/frame_IMG_2057.MOV_0.jpg'

# load as grayscale and run Canny edge detection
img = cv.imread(filename, 0)
edges = cv.Canny(img, 25, 200)

# show the original and the edge map side by side
plt.subplot(121), plt.imshow(img, cmap='gray')
plt.title('Original Image'), plt.xticks([]), plt.yticks([])
plt.subplot(122), plt.imshow(edges, cmap='gray')
plt.title('Edge Image'), plt.xticks([]), plt.yticks([])
plt.savefig("pic")
plt.show()

and the resulting image(s) we get are

[plot: Original Image and Edge Image, side by side]

What I'm looking for is a way to get the boundaries from the Canny edges and then crop the original image around those boundaries. Thanks!


1 Answer


If I understand correctly, you want to remove the background and extract the object. Instead of using Canny, here's an alternative approach. Since you didn't provide an original image, I took a screenshot of your image to use as input. In general, there are several ways to obtain a binary image for boundary extraction, including regular thresholding, Otsu's thresholding, adaptive thresholding, and Canny edge detection. In this case, Otsu's is probably the best choice since there is background noise.
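For reference, here's a minimal sketch of the thresholding options mentioned above (the "1.png" filename is just a placeholder for your input image, and the parameter values are illustrative, not tuned):

import cv2

# load the placeholder input and convert to grayscale
gray = cv2.cvtColor(cv2.imread("1.png"), cv2.COLOR_BGR2GRAY)

# regular (fixed) thresholding: pixels above 127 become white
_, regular = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

# Otsu's thresholding: the threshold value is chosen automatically from the histogram
_, otsu = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# adaptive thresholding: a separate threshold is computed for each local neighborhood
adaptive = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, 11, 2)

# Canny edge detection: keeps gradients between the two hysteresis thresholds
edges = cv2.Canny(gray, 25, 200)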


First, we convert the image to grayscale and then perform Otsu's threshold to obtain a binary image.

There are unwanted sections, so to remove them we perform a morphological opening to separate the joints.

Now that the joints are separated, we find contours and filter using contour area. We extract the largest contour, which represents the desired object, then draw this contour onto a mask.

We're almost there, but there are imperfections, so we perform a morphological closing to fill the holes.

Next, we bitwise-and with the original image.

Finally, to get the desired result, we color all black pixels on the mask white.

From here you could use Numpy slicing to extract the ROI, but I'm not completely sure what you were trying to do, so I'll leave that up to you (a sketch of one way to crop the ROI follows the code below).

import cv2
import numpy as np

# load image, keep a copy of the original, and prepare a blank mask
image = cv2.imread("1.png")
original = image.copy()
mask = np.zeros(image.shape, dtype=np.uint8)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Otsu's threshold to get a binary image, then morph open to remove noise
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5,5))
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=3)

# find contours, sort by area, and draw the largest one (the object) onto the mask
cnts = cv2.findContours(opening, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)
for c in cnts:
    cv2.drawContours(mask, [c], -1, (255,255,255), -1)
    break

# morph close to fill holes in the mask, then apply the mask to the original image
close = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel, iterations=4)
close = cv2.cvtColor(close, cv2.COLOR_BGR2GRAY)
result = cv2.bitwise_and(original, original, mask=close)

# color the background (black pixels on the mask) white
result[close==0] = (255,255,255)

cv2.imshow('result', result)
cv2.waitKey()
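
As for the ROI extraction mentioned above, here's a minimal sketch (not part of the original answer) of one way to crop the original image to the bounding box of the largest contour, reusing the same steps and the placeholder "1.png":

import cv2

# same preprocessing as above: grayscale, Otsu's threshold, morph open
image = cv2.imread("1.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5,5))
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=3)

# the largest external contour is assumed to be the object
cnts = cv2.findContours(opening, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
largest = max(cnts, key=cv2.contourArea)

# bounding box of that contour, then Numpy slicing to crop the ROI
x, y, w, h = cv2.boundingRect(largest)
ROI = image[y:y+h, x:x+w]
cv2.imwrite('ROI.png', ROI)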
  • Wow, thanks so much. Is there some way to get a little more of the background gone? When I run it, a little bit remains; I'm assuming it's because the glare on the image is so close to the color of the background. – CEWeinhauer Oct 30 '19 at 15:01
  • You can experiment with the kernel size and the number of iterations. Increasing the kernel size and the number of iterations will remove more and result in a smaller mask. – nathancy Oct 30 '19 at 20:20
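
For example, a minimal sketch of that tuning (the larger values are illustrative, not tuned for this image):

import cv2

# same preprocessing as in the answer
image = cv2.imread("1.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

# a larger kernel and more iterations remove more of the background noise,
# at the cost of a smaller, more eroded mask
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (7,7))                 # was (5,5)
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=5)  # was 3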