I'm new to OpenCV and I need to get all the contour points. This is easy by setting the cv2.RETR_TREE mode in the findContours method. The problem is that this way it returns redundant coordinates. So, for example, in this polygon, I don't want to get the contour points like this:

green are the contours found (3), red are the points found along those contours

But like this:

So, according to the first image, green marks the contours detected with RETR_TREE mode, and the point pairs 1-2, 3-5, 4-6, ... are redundant because they are so close to each other. I need to merge each pair of those redundant points into one and append it to the customContours array. For the moment, I only have the code corresponding to the first picture, which labels each detected point with its coordinates:

import cv2
import numpy as np

def getContours(img, minArea=20000, cThr=[100, 100]):
    font = cv2.FONT_HERSHEY_COMPLEX
    imgColor = img
    # Pre-process: grayscale, blur, edge detection, then close gaps:
    imgGray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    imgBlur = cv2.GaussianBlur(imgGray, (5, 5), 1)
    imgCanny = cv2.Canny(imgBlur, cThr[0], cThr[1])
    kernel = np.ones((5, 5))
    imgDial = cv2.dilate(imgCanny, kernel, iterations=3)
    imgThre = cv2.erode(imgDial, kernel, iterations=2)
    cv2.imshow('threshold', imgThre)
    contours, hierarchy = cv2.findContours(imgThre, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

    customContours = []
    for cnt in contours:
        area = cv2.contourArea(cnt)
        if area > minArea:
            peri = cv2.arcLength(cnt, True)
            approx = cv2.approxPolyDP(cnt, 0.009 * peri, True)
            bbox = cv2.boundingRect(approx)
            customContours.append([len(approx), area, approx, bbox, cnt])
            print('points: ', len(approx))
            # Label every approximated vertex with its index and coordinates:
            for i, (x, y) in enumerate(approx.reshape(-1, 2)):
                string = str(x) + " " + str(y)
                cv2.putText(imgColor, str(i + 1) + ': ' + string, (int(x), int(y)), font, 2, (0, 0, 0), 2)
    # Draw the contours, largest area first:
    customContours = sorted(customContours, key=lambda x: x[1], reverse=True)
    for cnt in customContours:
        cv2.drawContours(imgColor, [cnt[2]], 0, (0, 0, 255), 5)
    return imgColor, customContours
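For reference, the kind of distance-based merging I have in mind could be sketched like this: any two points closer than some pixel threshold get averaged into one (the mergeDistance value here is just a guess I'd have to tune to the image scale):

```python
import numpy as np

def mergePoints(points, mergeDistance=20):
    # points: list of (x, y) tuples; mergeDistance is a tuning assumption.
    merged = []
    for p in points:
        for i, q in enumerate(merged):
            # If p is within mergeDistance of an already-kept point,
            # replace that point with the midpoint of the two:
            if np.hypot(p[0] - q[0], p[1] - q[1]) < mergeDistance:
                merged[i] = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
                break
        else:
            # No nearby point found, keep p as a new point:
            merged.append(p)
    return merged

# Points 1-2 style duplicates collapse, far points survive:
print(mergePoints([(0, 0), (5, 5), (100, 100)], 20))  # [(2.5, 2.5), (100, 100)]
```

But this feels fragile, which is why I'm asking for a more robust way.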

Could you help me get the real points, as in the second picture?

(EDIT 01/07/21)

I want a generic solution, because the image could be more complex, such as the following picture. NOTE: notice that the middle arrow (points 17 and 18) doesn't enclose a closed area, so it isn't a polygon to study; we are not interested in obtaining its points. Also, notice that the order of the points isn't important, but if the input is the whole image, the code should know that there are 4 polygons, so the points of each polygon start at 0, then 1, etc.

Hamsi
  • The reason you are getting an "inner" contour is because of the edge detector - this produces two edges: the first one is the first zero cross, when the pixel intensity along a dimension (either horizontal or vertical) changes from `255` to `0`. The second edge is produced on the second zero cross, from `0` to `255`. Are you detecting edges in order to detect the middle section (points 2 to 5 in your last image) of the triangle? Are you interested only in getting a list of points (1 to 4 in your last image) or do you need the complete contour? – stateMachine Jun 30 '21 at 23:31

1 Answer


Here's my approach. It is mainly morphology-based. It involves convolving the image with a special kernel. This convolution identifies the end-points of the triangle as well as the intersection points where the middle line meets it. The result is a points mask containing the pixels that match the points you are looking for. After that, we can apply a little bit of morphology to join possibly duplicated points. What remains is to get a list of the coordinates of these points for further processing.

These are the steps:

  1. Get a binary image of the input via Otsu's thresholding
  2. Get the skeleton of the binary image
  3. Define the special kernel and convolve the skeleton image
  4. Apply a morphological dilate to join possible duplicated points
  5. Get the centroids of the points and store them in a list

Here's the code:

# Imports:
import numpy as np
import cv2

# image path
path = "D://opencvImages//"
fileName = "triangle.png"

# Reading an image in default mode:
inputImage = cv2.imread(path + fileName)

# Prepare a deep copy for results:
inputImageCopy = inputImage.copy()

# Convert BGR to Grayscale
grayImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2GRAY)

# Threshold via Otsu:
_, binaryImage = cv2.threshold(grayImage, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

The first bit computes the binary image. Very straightforward. I'm using this image as base, which is just a cleaned-up version of what you posted without the annotations. This is the resulting binary image:

Now, to perform the convolution we must first get the image "skeleton". The skeleton is a version of the binary image where lines have been normalized to have a width of 1 pixel. This is useful because we can then convolve the image with a 3 x 3 kernel and look for specific pixel patterns. Let's compute the skeleton using OpenCV's extended image processing module:

# Get image skeleton:
skeleton = cv2.ximgproc.thinning(binaryImage, None, 1)

This is the image obtained:

We can now apply the convolution. The approach is based on Mark Setchell's info in this post. The post mainly shows the method for finding end-points of a shape, but I extended it to also identify line intersections, such as the middle portion of the triangle. The main idea is that the convolution yields a very specific value where certain patterns of black and white pixels are found in the input image. Refer to the post for the theory behind this idea, but here we are looking for two values: 110 and 40. The first one occurs when an end-point has been found. The second one occurs when a line intersection is found. Let's set up the convolution:

# Threshold the skeleton so that its white (line) pixels get a value
# of 10 and black (background) pixels a value of 0:
_, binaryImage = cv2.threshold(skeleton, 128, 10, cv2.THRESH_BINARY)

# Set the convolution kernel:
h = np.array([[1, 1, 1],
              [1, 10, 1],
              [1, 1, 1]])

# Convolve the image with the kernel:
imgFiltered = cv2.filter2D(binaryImage, -1, h)

# Create list of thresholds:
thresh = [110, 40]
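To see where those two values come from, consider the response `cv2.filter2D` produces at a single pixel, which is just the element-wise sum of the kernel against the 3 x 3 neighborhood. With line pixels valued 10, an end-point (a line pixel with exactly one line neighbor) yields 10*10 + 1*10 = 110, while a background pixel touched by four line pixels yields 4*10 = 40. A quick numpy check on two hypothetical 3 x 3 patches:

```python
import numpy as np

# The convolution kernel from above:
h = np.array([[1,  1, 1],
              [1, 10, 1],
              [1,  1, 1]])

# End-point pattern: a line pixel (10) with exactly one line neighbor:
endPoint = np.array([[0,  0,  0],
                     [0, 10, 10],
                     [0,  0,  0]])

# Intersection pattern: a background pixel surrounded by four line pixels:
intersection = np.array([[ 0, 10,  0],
                         [10,  0, 10],
                         [ 0, 10,  0]])

# The filter response at the center pixel is the element-wise sum:
print((endPoint * h).sum())      # 110 -> end-point threshold
print((intersection * h).sum())  # 40  -> intersection threshold
```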

The first part is done. We are going to detect end-points and intersections in two separate steps. Each step produces a partial result; we can OR both results to get the final mask:

# Prepare the final mask of points:
(height, width) = binaryImage.shape
pointsMask = np.zeros((height, width, 1), np.uint8)

# Perform convolution and create points mask:
for t in range(len(thresh)):
    # Get current threshold:
    currentThresh = thresh[t]
    # Locate the threshold in the filtered image:
    tempMat = np.where(imgFiltered == currentThresh, 255, 0)
    # Convert and shape the image to a uint8 height x width x channels
    # numpy array:
    tempMat = tempMat.astype(np.uint8)
    tempMat = tempMat.reshape(height,width,1)
    # Accumulate mask:
    pointsMask = cv2.bitwise_or(pointsMask, tempMat)
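As a side note, that loop can be collapsed into a single vectorized expression, since all we want is a 255 wherever the filtered image equals any of the target values (a sketch with a tiny made-up filtered image for illustration, equivalent in behavior to the loop):

```python
import numpy as np

# Hypothetical small filter response, just for illustration:
imgFiltered = np.array([[  0, 110,  20],
                        [ 40,   0, 110]])
thresh = [110, 40]

# 255 wherever the response matches any target value, 0 elsewhere:
pointsMask = np.where(np.isin(imgFiltered, thresh), 255, 0).astype(np.uint8)
print(pointsMask)
# [[  0 255   0]
#  [255   0 255]]
```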

This is the final mask of points:

Note that the white pixels are the locations that matched our target patterns. Those are the points we are looking for. As the shape is not a perfect triangle, some points could be duplicated. We can "merge" neighboring blobs by applying a morphological dilation:

# Set kernel (structuring element) size:
kernelSize = 7
# Set operation iterations:
opIterations = 3
# Get the structuring element:
morphKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernelSize, kernelSize))

# Perform Dilate:
morphoImage = cv2.morphologyEx(pointsMask, cv2.MORPH_DILATE, morphKernel, None, None, opIterations, cv2.BORDER_REFLECT101)

This is the result:

Very nice, we now have big clusters of pixels (or blobs). To get their coordinates, one possible approach is to find the outer contours of these blobs, get their bounding rectangles, and compute each rectangle's centroid:

# Look for the outer contours (no children):
contours, _ = cv2.findContours(morphoImage, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Store the points here:
pointsList = []

# Loop through the contours:
for i, c in enumerate(contours):

    # Get the contours bounding rectangle:
    boundRect = cv2.boundingRect(c)

    # Get the centroid of the rectangle:
    cx = int(boundRect[0] + 0.5 * boundRect[2])
    cy = int(boundRect[1] + 0.5 * boundRect[3])

    # Store centroid into list:
    pointsList.append( (cx,cy) )

    # Set centroid circle and text:
    color = (0, 0, 255)
    cv2.circle(inputImageCopy, (cx, cy), 3, color, -1)
    font = cv2.FONT_HERSHEY_COMPLEX
    string = str(cx) + ", " + str(cy)
    cv2.putText(inputImageCopy, str(i) + ':' + string, (cx, cy), font, 0.5, (255, 0, 0), 1)

# Show image:
cv2.imshow("Circles", inputImageCopy)
cv2.waitKey(0)

These are the points located in the original input:

Note also that I've stored their coordinates in the pointsList list:

# Print the list of points:
print(pointsList)

This prints the centroids as the tuple (centroidX, centroidY):

[(717, 971), (22, 960), (183, 587), (568, 586), (388, 98)]
stateMachine
    Hi, this is amazing, but it seems to work only on this example. I want something generic: I set up a triangle with a middle border, but it could be something different and more complex, check the edit. – Hamsi Jul 01 '21 at 08:27