
I am working with videos that have borders (margins) around them. Some have them along all four sides, some along the left and right only, and some along the top and bottom only. The width of these margins is also not fixed. I am extracting frames from these videos, for example:

[example frame 1]

and

[example frame 2]

Both of these contain borders on the top and bottom.

Can anyone please suggest some methods to remove these borders from these images (in Python, preferably)? I came across some methods, like this one on Stack Overflow, but it deals with the ideal situation where the borders are perfectly black (0,0,0). In my case they may not be pitch black and may also contain jitter noise. Any help/suggestions would be highly appreciated.

Prithwish Jana
  • Post some of your non-ideal images so we can see what issue you have. Are the red boxes on your actual images? What about the green or dark markings? Are they on your images, too? – fmw42 Mar 06 '20 at 22:21
  • Sorry for the confusion caused. I have edited the images. The green and dark markings were not part of the original image, just some highlighting on my part (to show the black borders). – Prithwish Jana Mar 06 '20 at 22:45

2 Answers


Here is one way to do that in Python/OpenCV.

  • Read the image
  • Convert to grayscale and invert
  • Threshold
  • Apply morphology to remove small black or white regions, then invert again
  • Get the contour of the one region
  • Get the bounding box of that contour
  • Use numpy slicing to crop that area of the image to form the resulting image
  • Save the resulting image


import cv2
import numpy as np

# read image
img = cv2.imread('gymnast.png')

# convert to grayscale
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)

# invert gray image
gray = 255 - gray

# gaussian blur
blur = cv2.GaussianBlur(gray, (3,3), 0)

# threshold
thresh = cv2.threshold(blur,236,255,cv2.THRESH_BINARY)[1]

# apply close and open morphology to fill tiny black and white holes
kernel = np.ones((5,5), np.uint8)
thresh = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
thresh = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel)

# invert thresh
thresh = 255 - thresh

# get contours (presumably just one around the nonzero pixels) 
# then crop it to bounding rectangle
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
cntr = contours[0]
x,y,w,h = cv2.boundingRect(cntr)
crop = img[y:y+h, x:x+w]

cv2.imshow("IMAGE", img)
cv2.imshow("THRESH", thresh)
cv2.imshow("CROP", crop)
cv2.waitKey(0)
cv2.destroyAllWindows()

# save cropped image
cv2.imwrite('gymnast_crop.png',crop)


Input:

[image]

Thresholded and cleaned image:

[image]

Cropped Result:

[image]

fmw42
  • You have to pick a reasonable threshold for each image. But there is some latitude due to the morphology cleaning. – fmw42 Mar 07 '20 at 22:24
  • Yeah, since you are suggesting morphological opening & closing, I think this removes the need to pinpoint an exact threshold...now a range of values will serve as the threshold. Even classical methods like Otsu's should serve the purpose well (see the sketch after these comments). – Prithwish Jana Mar 07 '20 at 22:32
  • Well that is putting too much on the morphology. And then you have to adjust the morphology to clean even more or less depending upon the threshold. But if it works for your range of images, great. – fmw42 Mar 07 '20 at 22:45
  • Yes, agreed. Thanks again for your help. I will keep these points in mind. – Prithwish Jana Mar 07 '20 at 22:50
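
For reference, a minimal sketch of the Otsu-based variant mentioned in the comments above, assuming the same 'gymnast.png' input as in the answer's code; only the threshold step changes, and the morphology, contour and crop steps would continue exactly as above:

import cv2
import numpy as np

# prepare the inverted, blurred grayscale exactly as in the answer above
img = cv2.imread('gymnast.png')
gray = 255 - cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (3,3), 0)

# let Otsu's method pick the threshold instead of the fixed value 236
otsu_value, thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print('Otsu picked threshold:', otsu_value)
# ...then apply the same close/open morphology, contour and crop steps as above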

I figured out a way that is less dependent on a threshold.

Here is a summary of the method:

  1. Use the Sobel operator on the image to get I_x and I_y (the derivatives of the greyscale image with respect to each axis)
  2. Sum (or average) I_x along the y axis and I_y along the x axis
  3. For each list of sums, find the minimum and maximum; those should be the margins
  4. Perform validity checks to decide whether to actually use the indices found in step 3
  5. Crop the image using the indices you found.

If we calculate the derivatives of the greyscale image with respect to the horizontal and vertical directions, we should expect a spike where the margin meets the image. I used the Sobel operator to get an approximation and got the following images:

[Sobel operator output for the image: I_x and I_y]

Note how bright the transition between the white part of the flag and the black border is in the output I_y, as it represents the steepest change in greyscale.

Next, I averaged each column of I_x and each row of I_y (the code below uses np.average; a plain sum would put the extrema in the same places).

[plot: average per column of I_x and per row of I_y]

These are averages of a directed derivative, so the spikes are signed. This is actually helpful, as we can simply take the maximum as the left/top margin and the minimum as the right/bottom one.
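
As a quick sanity check of the signed-spike claim, here is a small synthetic sketch (the gray frame below is made up for illustration, not one of the images above): a flat gray frame with black borders on the left and right, where the argmax/argmin of the per-column average of I_x land on the two border edges.

import cv2
import numpy as np

# synthetic 100x200 flat gray frame with 20-pixel black borders on the left and right
frame = np.full((100, 200), 128, dtype=np.uint8)
frame[:, :20] = 0
frame[:, 180:] = 0

# I_x via the Sobel operator, then the average of each column
sobelx = cv2.Sobel(frame, cv2.CV_64F, 1, 0, ksize=5)
x_values = np.average(sobelx, axis=0)

# dark-to-bright transition -> positive spike (left margin),
# bright-to-dark transition -> negative spike (right margin)
print(np.argmax(x_values), np.argmin(x_values))   # roughly 20 and 179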

The final image is this:

[cropped image]

Note that this image has a black margin on all sides (even a 1-pixel margin on the right side, which is hard to notice). What about images with only partial margins?

This is where the method still depends on some threshold, but it is a statistical one: the max/min of the graph is taken as a margin only if it is at least 3 standard deviations away from the mean (why 3? because of the 'empirical rule', and because it works for most images I tried it with).

For example, if we take this image:

[image with a margin on the top only]

we get graphs with only one point that is 3 standard deviations away from the mean:

[plot: average per column of I_x and per row of I_y for the partially cropped image]

Thus the image is cropped correctly:

[cropped image]

Note that this method might not work as well when there are "fake margins" that you don't want to be removed.

Here is the code I used when analyzing this image:

import cv2
import matplotlib.pyplot as plt
import numpy as np

def remove_black_border_by_sobel(image, std_factor=3):
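    """Crop dark borders from an image by locating signed spikes in the averaged Sobel derivatives."""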
    image_gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    sobelx = cv2.Sobel(image_gray, cv2.CV_64F, 1, 0, ksize=5)
    sobely = cv2.Sobel(image_gray, cv2.CV_64F, 0, 1, ksize=5)
    fig, ax = plt.subplots(1, 2)
    ax[0].imshow(sobelx, cmap='cool')
    ax[1].imshow(sobely, cmap='cool')
    ax[0].set_title('derivative in x direction')
    ax[1].set_title('derivative in y direction')
    plt.show()
    plt.close()

    x_values = np.average(sobelx, axis=0)
    y_values = np.average(sobely, axis=1)
    fig, ax = plt.subplots(1, 2)
    ax[0].plot(x_values)
    ax[0].set_title('average I_x')
    ax[1].plot(y_values)
    ax[1].set_title('average I_y')
    ax[0].set_xlabel('x')
    ax[0].set_ylabel('average derivative')
    ax[1].set_xlabel('y')
    plt.show()
    plt.close()
    x_mean = np.mean(x_values)
    x_max_index = np.argmax(x_values)
    x_min_index = np.argmin(x_values)
    y_mean = np.mean(y_values)
    y_max_index = np.argmax(y_values)
    y_min_index = np.argmin(y_values)
    # check if the minimum value is very low
    if x_values[x_min_index] < x_mean - std_factor * np.std(x_values):
        x_right = x_min_index
    else:
        x_right = image_gray.shape[1]
    # check if the maximum value is very high
    if x_values[x_max_index] > x_mean + std_factor * np.std(x_values):
        x_left = x_max_index
    else:
        x_left = 0
    # check if the minimum value is very low
    if y_values[y_min_index] < y_mean - std_factor * np.std(y_values):
        y_bottom = y_min_index
    else:
        y_bottom = image_gray.shape[0]
    # check if the maximum value is very high
    if y_values[y_max_index] > y_mean + std_factor * np.std(y_values):
        y_top = y_max_index
    else:
        y_top = 0
    print(x_left, x_right, y_top, y_bottom)
    # check if the values are valid
    if x_left > x_right:
        x_left, x_right = 0, image_gray.shape[1]
    if y_top > y_bottom:
        y_top, y_bottom = 0, image_gray.shape[0]
    print(x_left, x_right, y_top, y_bottom)
    return image[y_top:y_bottom, x_left:x_right]
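
A minimal usage sketch (the filenames are hypothetical; note that the function also pops up the diagnostic matplotlib plots):

image = cv2.imread('frame.png')          # hypothetical input frame
cropped = remove_black_border_by_sobel(image, std_factor=3)
cv2.imwrite('frame_cropped.png', cropped)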