
I was able to extract the difference between two images (the original image has unsharp edges). What I am trying to do is transform the image into one with sharp quadrilateral edges (the middle step toward the final image). I am using Python 2.7 with cv2 and numpy. My main problem now is that I don't know how to program it to generate the final image (that is, how to cut off the unsharp edges and produce the final image with sharp edges).

Please, how can I do this image manipulation?

Original image: [image]

Middle step: [image]

Final image: [image]

Additional images: [3 images]

peter
  • Draw a white filled polygon on a numpy.zeros array as a mask or roi and use that to blacken out your image. See https://note.nkmk.me/en/python-opencv-numpy-alpha-blend-mask/ and https://datacarpentry.org/image-processing/04-drawing-bitwise/ – fmw42 Aug 19 '19 at 22:16
  • that could work but I have many images where I am doing difference detection and these images have not the same position and shape (they have in common foursquare, but this foursquare can have different sides on different images). Universal example would be great. – peter Aug 19 '19 at 22:30
  • Sorry, I will have to leave that to someone else, since I am not proficient with OpenCV. If you know the polygon so as to draw it on your image, you can draw it on a black background image to use as a mask. If you do not know the polygon, then you could threshold, do morphology erode and find the contour and then compute some N-sided polygon for it. OpenCV has a method for fitting polygons to some shape. See cv2.approxPolyDP at https://docs.opencv.org/3.1.0/dd/d49/tutorial_py_contour_features.html – fmw42 Aug 19 '19 at 23:14

1 Answer


First we find the smooth contour and then we straighten it into a quadrilateral using approxPolyDP:

import cv2
import numpy as np

img = cv2.imread(r'c:\TEMP\so57564249.jpg')

img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# threshold and shrink mask
_, thresh = cv2.threshold(img_gray, 10, 255, cv2.THRESH_BINARY)
thresh = cv2.erode(thresh, np.ones((10,10), np.uint8), iterations=1)

# find contour (note: in OpenCV 3.x findContours returns 3 values;
# this 2-value unpacking matches OpenCV 2.4 and 4.x)
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnt = contours[0]

# straighten contour to a quadrilateral: keep doubling epsilon
# until approxPolyDP yields at most 4 vertices
epsilon = 0.001 * cv2.arcLength(cnt, True)
while True:
    epsilon += epsilon
    approx = cv2.approxPolyDP(cnt, epsilon, True)
    if len(approx) <= 4:
        break

# output
thresh[:,:] = 0
thresh = cv2.fillPoly(thresh, [approx], 255)
img = cv2.bitwise_and(img, img, mask=thresh)
cv2.polylines(img, [approx], True, (0,255,0), 2)

[output image]

The choice of a kernel size of 10 for removing the blurred margin of the image is more or less arbitrary. You may want to adjust it. To visualize its effect you can add

cv2.polylines(img, [cnt], True, (0,0,255))

after the last line to draw the smooth contour of the initially thresholded image.

Stef
  • thank you, please how should I use it for images I added? (please see additional images in original text) In these images it is not working correctly. – peter Aug 20 '19 at 14:06
  • This is quite a different question, as in the original image the region of interest was already masked. In the new images you'll first have to extract the region of interest, then you can proceed as above. Extraction is difficult as neither color nor value (lightness) alone characterize the region. – Stef Aug 20 '19 at 15:58
  • I posted separate question about what I am trying to achieve here https://stackoverflow.com/questions/57580249/get-coordinates-of-4-edges-on-screen-from-image Could you please check it if you have any idea how to do it? Many thanks million times! – peter Aug 20 '19 at 19:20