
I have a program that classifies MRIs, but the input images are photos taken of the MRIs, so before using them I need to clean them up by cropping.

The idea is to have an input and output like this: Input photo and output example.

Using the image from the last example, here is the process my code uses to determine the outline of the MRI. I use the contour to find the extreme top, bottom, right and left points, which determine the height and width of the region to crop.

Output of the code

Now, one problem I have is that the points are not always in the correct positions; sometimes the script fails to find the MRI when there are similar colors in the background. For example, in this image there is a white background that takes up a big area, and the program ends up contouring it instead of the brain outline.

I don't know how to make the program ignore those big yellow areas. I have tried other threshold and color-space combinations, but grayscale is the one that works best for most of them. Can anyone suggest another approach for cropping this type of image, or a way to make the program ignore big uniform color areas?


This is my current code:

import cv2
import imutils
from google.colab.patches import cv2_imshow  # Colab replacement for cv2.imshow

img = cv2.imread(mri_photos[7])
og_img = img.copy()

# Convert to grayscale (cv2.imread loads images as BGR, so use COLOR_BGR2GRAY)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Smooth the image
gray = cv2.GaussianBlur(gray, (5, 5), 0)

ret, thresh = cv2.threshold(gray, 170, 255, cv2.THRESH_BINARY)

# Remove noise
thresh = cv2.erode(thresh, None, iterations=1)
thresh = cv2.dilate(thresh, None, iterations=5)

# find contours in thresholded image, then grab the largest one
cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
c = max(cnts, key=cv2.contourArea)

# find the extreme points
extLeft = tuple(c[c[:, :, 0].argmin()][0])
extRight = tuple(c[c[:, :, 0].argmax()][0])
extTop = tuple(c[c[:, :, 1].argmin()][0])
extBot = tuple(c[c[:, :, 1].argmax()][0])

img_cnt = cv2.drawContours(img, [c], -1, (0, 255, 255), 40)

# add extreme points
img_pnt = cv2.circle(img_cnt.copy(), extLeft, 50, (0, 0, 255), -1)
img_pnt = cv2.circle(img_pnt, extRight, 50, (160, 32, 240), -1)
img_pnt = cv2.circle(img_pnt, extTop, 50, (255, 0, 0), -1)
img_pnt = cv2.circle(img_pnt, extBot, 50, (255, 255, 0), -1)

ADD_PIXELS = 0
new_img = og_img[extTop[1]-ADD_PIXELS:extBot[1]+ADD_PIXELS, extLeft[0]-ADD_PIXELS:extRight[0]+ADD_PIXELS].copy()
  • **[1/3]** The problem is that the multiple gray values are throwing off your binary segmentation. You can't use a fixed threshold in cv2.threshold because the image histogram does not always represent only two distributions (foreground and background), and their "separation" value is not always located at the same pixel intensity. If the images are as complex as in your examples (i.e., a photo of a monitor with multiple windows and perspective distortion), you need a more robust approach. – stateMachine May 12 '23 at 04:22
  • **[2/3]** Some tips: Try to locate the black rectangular part of the MRI. If you can successfully segment it, you can, at the very least, find the four corners, rectify the whole image and crop it. First, resize the image, because currently it is gigantic and you don't need all that data. Plus, the downsampling will help get rid of the moiré pattern of the monitor, producing a better binary mask. Apply an extra blur filter to smooth the image. Then experiment with Otsu and adaptive thresholding and some morphology to clean the mask (see the first sketch after these comments). – stateMachine May 12 '23 at 04:23
  • **[3/3]** [Here's](https://i.imgur.com/vrefrxG.png) a possible result I obtained applying these steps in an image editor. Once you have clearly segmented the rectangular part, get its convex hull and fit a quadrilateral to it; that will give you the coordinates of the four corners. You can then [unwarp](https://i.imgur.com/sickvzV.png) the perspective to get a straight image. Then, [crop](https://i.imgur.com/CQovVdA.png) it (see the second sketch after these comments). Here's a related question that may help: https://stackoverflow.com/questions/67644977/recognizing-corners-page-with-opencv-partialy-fails/ – stateMachine May 12 '23 at 04:25
  • Why would anybody want to "classify" photos of screens with MRIs on them? There is nearly no information in those photos. – Costantino Grana May 12 '23 at 04:28
  • Why are you not working with the original MRIs? – fmw42 May 12 '23 at 04:37
  • Thank you, I was looking into adaptive thresholding since I suspected the threshold was the problem. I'm relatively new to image processing, so your instructions help a lot. – Franco Hauva May 12 '23 at 04:41
  • It is a cell phone application that detects brain tumors; the idea is that the user can take a picture of their MRI and get the result. I need to clean the images that the user takes so the CNN can process them. – Franco Hauva May 12 '23 at 04:44
  • @stateMachine starting a comment with "[1/3]"... you might as well post all that as an answer :D – Christoph Rackwitz May 12 '23 at 10:07
  • don't overestimate the value of 16 bits. it's still physical measurements. the process of an MRI has limits in spatial resolution and noise. besides, OP wants to throw AI at it. it's going to be close to fortune telling, unless the pathology captured by the image jumps out at you. accepting *photos of computer screens* can't make it much worse. – Christoph Rackwitz May 12 '23 at 21:42
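
To make the tips from comment [2/3] concrete, here is a minimal sketch of that preprocessing pipeline, assuming OpenCV 4.x. The input path, the 0.25 scale factor, and the kernel sizes are placeholder values to tune per image, not something given in the comments:

import cv2
import numpy as np

# Hypothetical input path
img = cv2.imread("mri_photo.jpg")

# Downscale: the photos are huge, and resampling also averages away the
# monitor's moiré pattern, which gives a cleaner binary mask
scale = 0.25
small = cv2.resize(img, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)

# Extra blur before thresholding
gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (7, 7), 0)

# Otsu picks the threshold from the histogram instead of a fixed 170;
# THRESH_BINARY_INV makes the dark MRI viewport white in the mask
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Morphology: close to fill holes, then open to drop small specks
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 9))
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel, iterations=2)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel, iterations=2)

# Keep the largest blob, which should be the dark rectangular MRI region
cnts, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
c = max(cnts, key=cv2.contourArea)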
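
And a sketch of the corner-finding and unwarping steps from comment [3/3], continuing from the contour c found above. The 0.02 epsilon factor for approxPolyDP is a common starting value, also an assumption:

# Fit a quadrilateral to the convex hull of the largest contour
hull = cv2.convexHull(c)
peri = cv2.arcLength(hull, True)
quad = cv2.approxPolyDP(hull, 0.02 * peri, True)

if len(quad) == 4:
    # Scale the corners back to the full-resolution image
    pts = quad.reshape(4, 2).astype(np.float32) / scale

    # Order corners: top-left, top-right, bottom-right, bottom-left
    s = pts.sum(axis=1)
    d = np.diff(pts, axis=1).ravel()
    src = np.float32([pts[np.argmin(s)], pts[np.argmin(d)],
                      pts[np.argmax(s)], pts[np.argmax(d)]])

    # Target rectangle sized from the longest opposing edges
    w = int(max(np.linalg.norm(src[1] - src[0]), np.linalg.norm(src[2] - src[3])))
    h = int(max(np.linalg.norm(src[3] - src[0]), np.linalg.norm(src[2] - src[1])))
    dst = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])

    # Unwarp the perspective and crop in one step
    M = cv2.getPerspectiveTransform(src, dst)
    cropped = cv2.warpPerspective(img, M, (w, h))

If approxPolyDP does not return exactly four points, the usual next steps are loosening or tightening the epsilon factor, or falling back to cv2.minAreaRect on the hull.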

0 Answers