
In the image linked below, I need to get all the yellow/green pixels in this rotated rectangle and get rid of the blue background, so that the rectangle's axes are aligned with the x and y axes.

I'm using numpy, but I don't have a clue what I should do.

I uploaded the array to this drive in case anyone would like to work with the actual array.

[Original image screenshot]

Thanks for the help in advance.

  • If I understand you correctly, you can't display a rotated image in an axis-aligned frame, as the frame would simply use the image's min and max x, y coordinates for its axis range. If instead you wish to rotate the yellow/green pixels, maybe use a rotation matrix. – Hadar Sep 28 '21 at 17:24
  • How would you go about finding the angle to rotate? – Bernard Reznik Sep 28 '21 at 17:30
  • 1
    for the specific kind of image, I would binarize it and then use derivatives to find the diagonal bottom-most line. this line has an angle that you would like to rotate (in the opposite direction) – Hadar Sep 28 '21 at 17:36
  • 1
    @HadarSharvit Though at the very least he'd need to check for aligning of the columns/rows in the beginning, otherwise OP might encounter e.g. col0 having height 10, col1 having height 11 and by the rotation either cut off data by cutting off pixels or mangle the data by utilizing anti-aliasing or still have some blue pixels around the border. – Peter Badida Sep 28 '21 at 17:49
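
A minimal numpy sketch of the rotation-matrix idea from the first comment (the point coordinates and the angle are placeholders, not values from the question):

import numpy as np

# Hypothetical inputs: pts is an (N, 2) array of (x, y) coordinates of the
# yellow/green pixels, theta is the rectangle's tilt in radians.
pts = np.array([[10.0, 12.0], [11.0, 13.0], [12.0, 14.0]])
theta = np.deg2rad(30)

R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Rotate the coordinates about their centroid to axis-align the rectangle.
centroid = pts.mean(axis=0)
rotated = (pts - centroid) @ R.T + centroid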

2 Answers


I used the same image as user2640045, but a different approach.

import numpy as np
import cv2

# load and convert image to grayscale
img = cv2.imread('image.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# binarize image
threshold, binarized_img = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# find the largest contour
contours, hierarchy = cv2.findContours(binarized_img, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
c = max(contours, key=cv2.contourArea)

# get size of the rotated rectangle
center, size, angle = cv2.minAreaRect(c)

# get size of the image
h, w, *_ = img.shape

# create a rotation matrix and rotate the image
M = cv2.getRotationMatrix2D(center, angle, 1.0)
rotated_img = cv2.warpAffine(img, M, (w, h))

# crop the image
pad_x = int((w - size[0]) / 2)
pad_y = int((h - size[1]) / 2)

cropped_img = rotated_img[pad_y : pad_y + int(size[1]), pad_x : pad_x + int(size[0]), :]
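
As an optional sanity check (not part of the original answer), the rectangle found by cv2.minAreaRect can be drawn on a copy of the image before rotating:

# optional: visualize the detected rotated rectangle (red outline, BGR color)
box = cv2.boxPoints((center, size, angle)).astype(np.int32)
debug = img.copy()
cv2.drawContours(debug, [box], 0, (0, 0, 255), 2)
cv2.imwrite('detected_rect.png', debug)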

Result:

[cropped, axis-aligned result image]

dlt_w
  • That was perfect, @dlt_w! It was really fast as well. I forgot to point out that my image isn't exactly RGB: the values of the first two bands range from -1 to 1, so OpenCV had a really hard time dealing with the actual array, rather than the screenshot. The way I managed to make it work was converting a single band to 0-255 and using it as your "gray". After that, it went smoothly. – Bernard Reznik Sep 29 '21 at 12:54
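
A rough sketch of the band conversion described in the comment above (the array contents and the [-1, 1] value range are assumptions taken from the comment, not code from the answer):

import numpy as np

# Hypothetical stand-in for the original array, whose first band is valued in [-1, 1].
arr = np.random.uniform(-1, 1, size=(200, 300, 3)).astype(np.float32)

# Rescale one band to 0-255 uint8 so OpenCV can treat it as a grayscale image.
band = arr[:, :, 0]
gray = np.clip((band + 1.0) / 2.0 * 255.0, 0, 255).astype(np.uint8)

The resulting gray array can then be passed to cv2.threshold as in the answer above.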

I realize there is an allow_pickle=False option in numpy's load method, but I didn't feel comfortable unpickling/using data from the internet, so I used the small image instead. After removing the coordinate system and other decorations I had

[screenshot of the array with the coordinate system removed]
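
For reference, loading the uploaded array directly would look roughly like this (the filename is a placeholder; the rest of this answer works on the screenshot instead):

import numpy as np

# hypothetical filename; allow_pickle=False refuses to load pickled object arrays
arr = np.load('array.npy', allow_pickle=False)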

I define two helper methods: one to rotate the image, taken from another Stack Overflow thread (see the link in the code below), and one to get a mask that is True at a specified color and False otherwise.

import numpy as np
import matplotlib.pyplot as plt
import cv2
import functools
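# Assumption (the loading step is not shown in the original answer): 'arr' is the
# RGBA screenshot as a float array in [0, 1]; the filename below is a placeholder.
arr = plt.imread('image.png')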

color = arr[150,50]

def similar_to_boundary_color(arr, color=tuple(color)):
    mask = functools.reduce(np.logical_and, [np.isclose(arr[:,:,i], color[i]) for i in range(4)])
    return mask

#https://stackoverflow.com/a/9042907/2640045
def rotate_image(image, angle):
    image_center = tuple(np.array(image.shape[1::-1]) / 2)
    rot_mat = cv2.getRotationMatrix2D(image_center, angle, 1.0)
    result = cv2.warpAffine(image, rot_mat, image.shape[1::-1], flags=cv2.INTER_LINEAR)
    return result

Next I calculate the angle to rotate by. I do that by finding the lowest pixel at widths 50 and 300. I picked those since they are far enough from the boundary not to be affected by missing corners etc.

i,j = np.where(~similar_to_boundary_color(arr))

slope = (max(i[j == 50])-max(i[j == 300]))/(50-300)
angle = np.arctan(slope)
arr = rotate_image(arr, np.rad2deg(angle))
plt.imshow(arr)

[rotated image]

One way of doing the cropping is the following. You calculate the middle in height and width. Then you take two slices around the middle: about 20 pixels wide in one direction, and running from the edge up to the middle in the other. The biggest/smallest index where the pixel is background colored (either the original background color or the (0,0,0,0) fill introduced by the rotation) is a reasonable point to cut.

i,j = np.where(~(~similar_to_boundary_color(arr) & ~similar_to_boundary_color(arr, (0,0,0,0)))) 

imid, jmid = np.array(arr.shape)[:2]/2
imin = max(i[(i < imid) & (jmid - 10 < j) & (j < jmid + 10)])
imax = min(i[(i > imid) & (jmid - 10 < j) & (j < jmid + 10)])
jmax = min(j[(j > jmid) & (imid - 10 < i) & (i < imid + 10)])
jmin = max(j[(j < jmid) & (imid - 10 < i) & (i < imid + 10)])

arr = arr[imin:imax,jmin:jmax]
plt.imshow(arr)

and the result is: [cropped, axis-aligned image]

Lukas S
  • Thanks @user2640045! It was of great help. I managed to get to the second-to-last image following your code, but didn't follow how you ended up with the last image. Did you crop it by hand? I'm not too worried about cutting a little bit of the image at the borders to avoid ending up with the blue region. How would you automate the cropping process (this is important for the purpose of the application)? – Bernard Reznik Sep 28 '21 at 20:08
  • @BernardReznik Well, I am not 100% happy with it, but I added a cropping step. – Lukas S Sep 28 '21 at 23:59
  • Oh, I see. That's a good way to do the padding. I ended up using @dlt_w's approach, but thank you very much for answering! – Bernard Reznik Sep 29 '21 at 12:56