
I have tried to get the edge of the mask image with the following code:

import cv2
import numpy as np
from matplotlib import pyplot as plt

img = cv2.imread('ISIC_0000000_segmentation.png',0)
edges = cv2.Canny(img,0,255)

plt.subplot(121), plt.imshow(img, cmap='gray')
plt.title('Original Image'), plt.xticks([]), plt.yticks([])
plt.subplot(122), plt.imshow(edges, cmap='gray')
plt.title('Edge Image'), plt.xticks([]), plt.yticks([])

plt.show()

What I get is this:

But the edge isn't smooth for some reason.

My plan was to use the edge image to crop the following picture:

Does anyone know how I could make the edge image better and how I could use this to crop the normal image?

EDIT: @Mark Setchell made a good point: if I could use the mask image directly to crop the image, that would be great.

Also: it might be possible to lay the normal image precisely over the mask image so that the black area on the mask covers the blue-ish area in the normal picture.

EDIT: @Mark Setchell introduced the idea of multiplying the normal image with the mask image, so that the background would become 0 (black) and the rest would keep its color. Would it be a problem when multiplying that my mask image is a .png and my normal picture is a .jpg?

EDIT: I have written the following code to try to multiply two pictures:

# Importing Image and ImageChops modules from the PIL package
from PIL import Image, ImageChops

# Opening the original image
im1 = Image.open("ISIC_0000000.jpg")

# Opening the mask image
im2 = Image.open("ISIC_0000000_segmentation.png")

# Applying the multiply method
im3 = ImageChops.multiply(im1, im2)

im3.show()

But I get the error `ValueError: images do not match`.

Does anyone know how I could solve this?

  • I don't understand why you want the edge image at all. Why not mask the image with the mask? – Mark Setchell Dec 12 '19 at 15:35
  • My initial thought was to outline the border so I would be able to still use the background, but I don't really need the background anymore so yes if it was possible to directly use the mask image that would be awesome. The problem is that I don't know how. – Nawin Narain Dec 12 '19 at 15:42
  • Where your mask is black, it is zero. So if you multiply your image by the mask, it will become zero (black). Give some thought to the white parts. – Mark Setchell Dec 12 '19 at 15:48
  • White would give "1", which can be multiplied with every color, and black "0" would dominate every color because everything times 0 results in 0. Is this what you are trying to tell me? Because this would fix my problem! – Nawin Narain Dec 12 '19 at 15:54
  • That's where I was heading. Your mask may have white=1 or white=255, but yes. – Mark Setchell Dec 12 '19 at 15:56
  • Would it be a problem if my mask image is .png format and my normal picture is .jpg format? – Nawin Narain Dec 12 '19 at 15:57
  • Not normally - they are all just numbers to Numpy. Just be careful if you mix reals and integers, or subtract unsigned numbers, or something is palettised or 16-bit. – Mark Setchell Dec 12 '19 at 16:01
  • Your PNG may be palettised (1 channel) if it has fewer than 256 colours. Make sure it is 3-channel with `im2 = Image.open(...).convert('RGB')` – Mark Setchell Dec 12 '19 at 21:47

3 Answers


If I understand correctly, you want to extract the object and remove the background. To do this, you can just do a simple cv2.bitwise_and() with the mask and the original input image.

Does anyone know how I could make the edge image better and how I could use this to crop the normal image?

To remove the background from the image, you don't need an edge image; a thresholded image can be used to keep only the desired parts of the image. You can use the mask image directly to crop the image and remove the background. Other approaches for obtaining a binary mask include a fixed threshold value, adaptive thresholding, or Canny edge detection. Here's a simple example using Otsu's threshold to obtain a binary mask, followed by a bitwise-and operation.

Here's the result with the background removed:

You can also set the pixels where the mask is zero to white if you want the removed background to be white instead:

Note: Depending on how "smooth" you want the result, you can apply a blur to the image before thresholding to smooth out the edges. This can include averaging, Gaussian, median, or bilateral filtering.
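For example, a minimal sketch of adding a Gaussian blur before the threshold (the 9x9 kernel size here is just an illustrative choice):

import cv2

# Load the image and convert to grayscale
image = cv2.imread('1.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Smoothing before thresholding gives a smoother mask outline
blurred = cv2.GaussianBlur(gray, (9,9), 0)

# Otsu's threshold on the blurred image
thresh = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]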


Code

import cv2

# Load image, grayscale, Otsu's threshold
image = cv2.imread('1.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

# Remove background using bitwise-and operation
result = cv2.bitwise_and(image, image, mask=thresh)
result[thresh==0] = [255,255,255] # Turn background white

cv2.imshow('thresh', thresh)
cv2.imshow('result', result)
cv2.waitKey()
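Since you already have the segmentation mask as its own file, you can also skip the thresholding step and apply that mask directly (a minimal sketch, assuming the mask is white-on-black and the same size as the photo; the filenames are the ones from the question):

import cv2

# Load the photo and the segmentation mask (as a single-channel image)
image = cv2.imread('ISIC_0000000.jpg')
mask = cv2.imread('ISIC_0000000_segmentation.png', 0)

# Force the mask to be strictly 0/255
mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)[1]

# Keep only the pixels where the mask is white; everything else becomes black
result = cv2.bitwise_and(image, image, mask=mask)

cv2.imshow('result', result)
cv2.waitKey()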
– nathancy

The detected edge isn't smooth because the actual edge in the image isn't smooth. You can try filtering the original image first with low-pass filters.

If you can use contours, the following will work:

import numpy as np
import cv2
from matplotlib import pyplot as plt

# Read in image
imgRaw = cv2.imread('./Misc/edgesImg.jpg',0)

# Blur image
blurSize = 25
blurredImg = cv2.blur(imgRaw,(blurSize,blurSize))

# Convert to Binary
thrImgRaw, binImgRaw = cv2.threshold(imgRaw, 0, 255, cv2.THRESH_OTSU)
thrImgBlur, binImgBlur = cv2.threshold(blurredImg, 0, 255, cv2.THRESH_OTSU)

# Detect the contours in the image
contoursRaw = cv2.findContours(binImgRaw,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
contoursBlur = cv2.findContours(binImgBlur,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)

# Draw all the contours
contourImgOverRaw = cv2.drawContours(imgRaw, contoursRaw[0], -1, (0,255,0),5)
contourImgOverBlur = cv2.drawContours(blurredImg, contoursBlur[0], -1, (0,255,0),5)

# Plotting
plt.figure()
plt.subplot(121)
plt.imshow(contourImgOverRaw)
plt.title('Raw Edges'), plt.xticks([]), plt.yticks([])
plt.subplot(122)
plt.imshow(contourImgOverBlur)
plt.title('Edges with {}px Blur'.format(blurSize)), plt.xticks([]), plt.yticks([])
plt.show()

Result images: 25px blur and 75px blur.

EDIT: Here's more info on getting a mask of an image from contours.
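As a rough sketch of that idea (not the linked post's code), you can fill the detected contours into an empty mask and then apply it with a bitwise-and; the [0] indexing on findContours assumes OpenCV 4.x:

import cv2
import numpy as np

# Load the image as grayscale and threshold it
img = cv2.imread('./Misc/edgesImg.jpg', 0)
binImg = cv2.threshold(img, 0, 255, cv2.THRESH_OTSU)[1]

# Find the external contours and fill them into an empty mask
contours = cv2.findContours(binImg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[0]
mask = np.zeros(img.shape, dtype=np.uint8)
cv2.drawContours(mask, contours, -1, 255, thickness=cv2.FILLED)

# Keep only the pixels inside the filled contours
result = cv2.bitwise_and(img, img, mask=mask)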

– Tee

You can use morphological operations to get the edge.

Sorry for using MATLAB:

I = imbinarize(rgb2gray(imread('I.png'))); %Load input image, and convert to binary image.

%Erode the image with a 3x3 mask
J = imerode(I, ones(3));

%Perform XOR operation (1 xor 1 = 0, 0 xor 0 = 0, 0 xor 1 = 1, 1 xor 0 = 1)
J = xor(I, J);

%Use "skeleton" operation to make sure edge thickness is 1 pixel.
K = bwskel(J); 

Result:
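For reference, a rough OpenCV/Python sketch of the same erode-and-XOR idea (the final skeleton/thinning step is omitted here, since it needs an extra package such as opencv-contrib or scikit-image):

import cv2
import numpy as np

# Load the mask and force it to be strictly binary (0/255)
mask = cv2.imread('ISIC_0000000_segmentation.png', 0)
mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)[1]

# Erode with a 3x3 kernel, then XOR with the original: only the boundary pixels remain
eroded = cv2.erode(mask, np.ones((3, 3), dtype=np.uint8))
edge = cv2.bitwise_xor(mask, eroded)

cv2.imshow('edge', edge)
cv2.waitKey()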

As Mark mentioned, you don't need the edges for cropping (unless you are using a special cropping method that I am not aware of).

– Rotem