I am trying to remove the black background from an image using OpenCV so that only the main imagery is kept, but I have not been able to get rid of the background pixels. Here is the code I am using, along with the original input image.
import numpy as np
import cv2
from matplotlib import pyplot as plt
img = cv2.imread('C:\\Users\\mdl518\\Desktop\\input.png')
mask = np.zeros(img.shape[:2],np.uint8)
bgdModel = np.zeros((1,65),np.float64)  # temporary model arrays used internally by grabCut
fgdModel = np.zeros((1,65),np.float64)
rect = (0,0,1035,932)  # rectangle spanning the whole image, formatted as (x, y, width, height)
cv2.grabCut(img,mask,rect,bgdModel,fgdModel,5,cv2.GC_INIT_WITH_RECT)
mask2 = np.where((mask==2)|(mask==0),0,1).astype('uint8')  # background/probable background -> 0, foreground -> 1
img = img*mask2[:,:,np.newaxis]  # zero out the background pixels
plt.imshow(img)  # note: cv2.imread returns BGR, so matplotlib displays the channels swapped
plt.savefig('C:\\Users\\mdl518\\Desktop\\output.png')
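Writing the masked array out directly (e.g. with cv2.imwrite instead of going through matplotlib, sketched below) would not help either, I think, since mask2 only sets the background pixels to zero (black) rather than removing them:
# hypothetical alternative to plt.savefig: write the masked array straight to disk
cv2.imwrite('C:\\Users\\mdl518\\Desktop\\output.png', img)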
I am essentially adapting the GrabCut code outlined here (https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_grabcut/py_grabcut.html), which illustrates foreground extraction using OpenCV. However, I am still unable to remove the surrounding background pixels from the input image while preserving the integrity of the main imagery in the output image. Is there an easier way to go about this? I also tried to crop/remove the background using thresholding (cv2.threshold) and contours but still couldn't figure it out. Any assistance is most appreciated!
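For reference, the thresholding/contour route I was attempting was roughly along these lines (a rough sketch only; the threshold value of 10 and keeping just the largest contour are assumptions on my part, and the cv2.findContours unpacking assumes OpenCV 4.x):
import numpy as np
import cv2
img = cv2.imread('C:\\Users\\mdl518\\Desktop\\input.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# treat near-black pixels as background; the threshold of 10 is a guess
_, thresh = cv2.threshold(gray, 10, 255, cv2.THRESH_BINARY)
# external contours of the bright regions (OpenCV 4.x return order)
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)
# fill the largest contour to build a mask, then apply it to the image
contour_mask = np.zeros(gray.shape, np.uint8)
cv2.drawContours(contour_mask, [largest], -1, 255, cv2.FILLED)
result = cv2.bitwise_and(img, img, mask=contour_mask)
cv2.imwrite('C:\\Users\\mdl518\\Desktop\\output.png', result)
This still leaves the pixels outside the contour as black rather than actually removing them, which is the part I can't figure out.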