
I am trying to convert the white background of the input image to black using Python OpenCV, but not all of the white pixels are being converted to black. I have attached the input and output images.

Input image:

[input image in the window]

Output image:

[output image in the window]

I have used the following code for conversion:

img[np.where((img == [255, 255, 255]).all(axis=2))] = [0, 0, 0]

What should I do?
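A minimal illustration of the failure mode, assuming the leftover pixels are near-white (e.g. anti-aliased values such as 254) rather than exactly [255, 255, 255]:

```python
import numpy as np

# Hypothetical 1x2 image: one pure-white pixel, one near-white pixel
img = np.array([[[255, 255, 255], [254, 254, 254]]], np.uint8)

# The exact-match assignment from the question
img[np.where((img == [255, 255, 255]).all(axis=2))] = [0, 0, 0]

# Only the exact match was blackened; the near-white pixel survives
```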

Gaurav Sawant (asked), Jeru Luke (edited)
  • detect the elliptical region and mask all outside of it – Micka Jul 15 '18 at 10:03
  • or for example you could first mask a "near white" image and then only use those pixels as background that are connected to the image border. – Micka Jul 15 '18 at 10:22

2 Answers


I know this has already been answered, but here is a working Python solution.

First, I found this thread explaining how to remove white pixels.

The result:

[result]

Another test image:

[another test image]

Edit: this is a much better and shorter method. I looked into it after @ZdaR commented on looping over an image's matrix.

[Updated Code]

import cv2

img = cv2.imread("Images/test.png")

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY)

# Blacken every pixel the threshold marked as near-white (grayscale >= 240)
img[thresh == 255] = 0

# Erode to eat away the thin white fringe left around the subject
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
erosion = cv2.erode(img, kernel, iterations=1)

cv2.namedWindow('image', cv2.WINDOW_NORMAL)
cv2.imshow("image", erosion)
cv2.waitKey(0)
cv2.destroyAllWindows()

Source

[Old Code]

import cv2
import numpy as np

img = cv2.imread("Images/test.png")

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY)

black_px = np.asarray([0, 0, 0])

(row, col) = thresh.shape
img_array = np.array(img)

# Walk the image pixel by pixel and blacken those the threshold marked as white
for r in range(row):
    for c in range(col):
        if thresh[r][c] == 255:  # thresh is single-channel: either 0 or 255
            img_array[r][c] = black_px

# Erode to eat away the thin white fringe left around the subject
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
erosion = cv2.erode(img_array, kernel, iterations=1)

cv2.namedWindow('image', cv2.WINDOW_NORMAL)
cv2.imshow("image", erosion)
cv2.waitKey(0)
cv2.destroyAllWindows()

Other Sources used: OpenCV Morphological Transformations

Benehiko
  • While using OpenCV it is never a good idea to iterate over the image matrix pixel by pixel using nested for loops in Python; Numpy syntax is always preferable. – ZdaR Jul 15 '18 at 12:24

I think not all of the "white" pixels in the image are exactly [255, 255, 255]. Instead, use a threshold: treat [220, 220, 220] and above as white and convert those pixels to [0, 0, 0].
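That suggestion can be sketched with a vectorised Numpy mask; the function name and the 220 cut-off are illustrative:

```python
import numpy as np

def near_white_to_black(img, thresh=220):
    """Set every pixel whose B, G and R values are all >= thresh to black."""
    out = img.copy()
    out[np.all(out >= thresh, axis=2)] = 0  # boolean mask of near-white pixels
    return out
```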

  • this might affect the bright yellow region in the centre! – Jeru Luke Jul 15 '18 at 09:56
  • conversion to HSV may help in segmenting the image – Ankur S Jul 15 '18 at 10:03
  • As Jeru Luke said, the threshold [220,220,220] is affecting the yellow region in some images. – Gaurav Sawant Jul 15 '18 at 10:09
  • Use trial and error: try [230, 230, 230] next; if it still affects yellow, go for [235, 235, 235], and so on. – Parth Sarthi Sharma Jul 15 '18 at 10:19
  • This approach does not scale well. Whatever you set the threshold to, *any* occurrence inside the image will also be affected. The question is badly specified – OP does *not* want **all** lighter pixels converted to black, but only those *around* the main image. – Jongware Jul 15 '18 at 10:54