
Using Python, I want to do the following to the x-ray image below:

  1. identify the rotation of the (imperfect) rectangle block
  2. rotate the image so that it is vertical (portrait orientation)
  3. crop away the remaining white space

I guess this is partly the reverse of this question, where the tools are most likely identical, with the addition of a corner detector. I'm not entirely sure how best to approach this, and it seems like a problem that someone has already solved.

[x-ray image]

Max Gordon

1 Answer

This can be done using the Python bindings to the OpenCV library. The following code has been adapted from something I had already, so it can probably be further optimised and improved.

The image you have given is not only rotated, it is also not rectangular, so the script works in two main stages. First it determines the rotation of the image, then rotates and crops it around the minimum-area bounding rectangle. It then stretches the resulting image to fit that rectangle.
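
For reference, the heart of the first stage is just a threshold, an edge detection and `cv2.minAreaRect`, which returns the rotation angle directly. A minimal sketch (the filename is a placeholder, and a mostly white background is assumed):

import cv2

image = cv2.imread("xray.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Pixels brighter than the threshold are treated as background
_, thresh = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY)
edges = cv2.Canny(thresh, 100, 100, apertureSize=3)
points = cv2.findNonZero(edges)
centre, dimensions, theta = cv2.minAreaRect(points)
print "Rotation: %.1f degrees" % theta  # OpenCV 3 reports theta in [-90, 0)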

[Initial threshold image]

[Initial bounding rectangle]

[Rotated and cropped image]

[Polygon to stretch from]

[Final cropped image]

import numpy as np
import cv2
import math

THRESHOLD = 240

def subimage(image, center, theta, width, height):
    # Extract a width x height patch rotated by theta (degrees) around center
    if 45 < theta <= 90:
        theta = theta - 90 # normalise the angle so the patch comes out upright
        width, height = height, width

    theta *= math.pi / 180 # convert to rad
    v_x = (math.cos(theta), math.sin(theta))
    v_y = (-math.sin(theta), math.cos(theta))
    s_x = center[0] - v_x[0] * (width / 2) - v_y[0] * (height / 2)
    s_y = center[1] - v_x[1] * (width / 2) - v_y[1] * (height / 2)
    mapping = np.array([[v_x[0],v_y[0], s_x], [v_x[1],v_y[1], s_y]])
    return cv2.warpAffine(image, mapping, (width, height), flags=cv2.WARP_INVERSE_MAP, borderMode=cv2.BORDER_REPLICATE)

def auto_crop(image_source):
    # First slightly crop edge - some images had a rogue 2 pixel black edge on one side
    init_crop = 5
    h, w = image_source.shape[:2]
    image_source = image_source[init_crop:init_crop+(h-init_crop*2), init_crop:init_crop+(w-init_crop*2)]

    # Add back a white border
    image_source = cv2.copyMakeBorder(image_source, 5,5,5,5, cv2.BORDER_CONSTANT, value=(255,255,255))

    image_gray = cv2.cvtColor(image_source, cv2.COLOR_BGR2GRAY)
    _, image_thresh = cv2.threshold(image_gray, THRESHOLD, 255, cv2.THRESH_BINARY)
    image_thresh2 = cv2.Canny(image_thresh, 100, 100, apertureSize=3)
    points = cv2.findNonZero(image_thresh2)

    rect = cv2.minAreaRect(points)
    centre, dimensions, theta = rect

    width = int(dimensions[0])
    height = int(dimensions[1])

    box = cv2.boxPoints(rect) # corner points of the rotated rectangle (needs OpenCV >= 3.0)
    box = np.int0(box)

    # Debug image: draw the bounding rectangle (second image above)
    temp = image_source.copy()
    cv2.drawContours(temp, [box], 0, (255,0,0), 2)

    M = cv2.moments(box)    
    cx = int(M['m10']/M['m00'])
    cy = int(M['m01']/M['m00'])

    # theta+90 together with the swapped height/width brings the patch out in portrait orientation
    image_patch = subimage(image_source, (cx, cy), theta+90, height, width)

    # add back a small border
    image_patch = cv2.copyMakeBorder(image_patch, 1,1,1,1, cv2.BORDER_CONSTANT, value=(255,255,255))

    # Convert image to binary, edge is black. Do edge detection and convert edges to a list of points.
    # Then calculate a minimum set of points that can enclose the points.
    _, image_thresh = cv2.threshold(image_patch, THRESHOLD, 255, cv2.THRESH_BINARY_INV)
    image_thresh = cv2.Canny(image_thresh, 100, 100, apertureSize=3)
    points = cv2.findNonZero(image_thresh)
    hull = cv2.convexHull(points)

    # Find min epsilon resulting in exactly 4 points, typically between 7 and 21
    # This is the smallest set of 4 points to enclose the image.
    for epsilon in range(3, 50):
        hull_simple = cv2.approxPolyDP(hull, epsilon, 1)

        if len(hull_simple) == 4:
            break

    hull = hull_simple

    # Find closest fitting image size and warp/crop to fit
    # (ie reduce scaling to a minimum)

    x,y,w,h = cv2.boundingRect(hull)
    target_corners = np.array([[0,0],[w,0],[w,h],[0,h]], np.float32)

    # Sort hull into tl,tr,br,bl order. 
    # n.b. hull is already sorted in clockwise order, we just need to know where top left is.

    source_corners = hull.reshape(-1,2).astype('float32')
    min_dist = 100000
    index = 0

    for n in xrange(len(source_corners)):
        x,y = source_corners[n]
        dist = math.hypot(x,y)

        if dist < min_dist:
            index = n
            min_dist = dist

    # Rotate the array so tl is first. np.roll flattens the (4,2) array,
    # so shift by two values per point.
    source_corners = np.roll(source_corners, -(2 * index))

    try:
        transform = cv2.getPerspectiveTransform(source_corners, target_corners)
        return cv2.warpPerspective(image_patch, transform, (w,h))

    except cv2.error:
        print "Warp failure"
        return image_patch


cv2.namedWindow("Result")
image_src = cv2.imread("xray.png")
image_cropped = auto_crop(image_src)
cv2.imwrite("cropped xray.png", image_cropped)
cv2.imshow("Result", image_cropped) 
cv2.waitKey(0)

Thanks go to this StackOverflow answer for the subimage function.

Tested on Python 2.7 and OpenCV 3.0
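
The code above is Python 2; on Python 3, `xrange` becomes `range` and the `print` statement becomes a function. The moments step can also raise the division-by-zero error mentioned in the comments below; a sketch of one way to guard against it (`box_centre` is a hypothetical helper, not part of the code above):

import cv2
import numpy as np

def box_centre(box):
    # Centre of the box via moments, falling back to the mean of the
    # corner points when the contour area is zero (the division-by-zero case)
    M = cv2.moments(box)
    if M['m00'] == 0:
        cx, cy = np.mean(box.reshape(-1, 2), axis=0).astype(int)
        return int(cx), int(cy)
    return int(M['m10'] / M['m00']), int(M['m01'] / M['m00'])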

Martin Evans
  • Thanks! Been trying this on a different set of images and I would like to be able to use a numpy-matrix of uint8 as input, but I haven't quite figured out how to do that in the best way. Secondly, I get division-by-zero errors at `int(M['m10']/M['m00'])` (should be easy to fix), assertion error "src.checkVector(2, CV_32F) == 4 && dst.checkVector(2, CV_32F) == 4" at `transform = cv2.getPerspectiveTransform(source_corners, target_corners)`, and a cryptic "(-215) total >= 0 && (depth == CV_32F || depth == CV_32S)" error on the line `hull = cv2.convexHull(points)`. Any ideas? – Max Gordon Sep 08 '15 at 12:39
  • Yes, it does need testing on a range of files. My other code had trouble with `findContours` returning many contours. I find it easiest to display the image as you go along. Also use drawContours to see where it thinks the edges are. Not sure on the uint8, would need to investigate. – Martin Evans Sep 08 '15 at 12:52
  • I have made some changes. It no longer uses findContours. I also found it worked better if a white border is added prior to stretching. This worked better with my own test images. – Martin Evans Sep 08 '15 at 14:36
  • Seems to work rather nicely; you may want to specify that OpenCV >= 3.0.0 is needed for it to work (`boxPoints` doesn't seem to exist in the standard Ubuntu version). I've tweaked the code a little (see [here](http://pastebin.com/4CGxtdYq)) to work with black borders instead of white - seems to work. The 2 things left that I would like to add are (1) get it to work with a numpy matrix of uint8 type so that I can plug it directly into the DICOM workflow, (2) add a *debugging alternative* that shows the different steps as in your post, with the rectangles and polygons. – Max Gordon Sep 09 '15 at 13:42
  • I agree with the debug version, I have a master version with all kinds of image shows, I ended up cutting them all out of the post for clarity's sake. Have added version numbers. – Martin Evans Sep 09 '15 at 14:21
  • I'm struggling a little with piecing everything together since I'm new to OpenCV - it seems that the piece with `cv2.moments` can just as well be substituted with the `centre` returned by `cv2.minAreaRect`, or am I missing something? – Max Gordon Sep 16 '15 at 13:18
  • Quite possibly, I would need to check. I might have needed it when using a different function which didn't return the centre. – Martin Evans Sep 16 '15 at 13:19
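
Regarding the last two comments, the substitution would look something like this inside `auto_crop`, replacing the `cv2.moments` block (a sketch, not verified against the original test images):

rect = cv2.minAreaRect(points)
centre, dimensions, theta = rect
# Use the centre reported by minAreaRect directly instead of
# recomputing it from the moments of the box points
cx, cy = int(centre[0]), int(centre[1])
image_patch = subimage(image_source, (cx, cy), theta + 90,
                       int(dimensions[1]), int(dimensions[0]))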