
I'm trying to count the number of objects in a frame by finding their contours with OpenCV.

This is the frame after applying the Canny filter:

Then I call the findContours() method and keep only the contours of suitable size. When I overlay them on the frame, I get the following picture.

As you can see, only objects whose contours are fully closed are detected.

So the question is: how can I artificially close the object boundaries so that every contour is complete?

I tried using dilate and erode (result of that), but afterwards the borders of the objects are glued together and their contours can no longer be found separately.
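For reference, this is roughly what I tried (a minimal sketch; the file name, Canny thresholds, and kernel size are arbitrary placeholders, not the exact values from my pipeline):

import cv2
import numpy as np

# Placeholder input and Canny thresholds
gray = cv2.imread('frame.png', cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(gray, 100, 200)

# Dilate then erode (morphological closing) to bridge gaps in the edges;
# depending on the kernel size, nearby objects end up glued together
kernel = np.ones((5, 5), np.uint8)
closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)

# OpenCV 4.x return signature
contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print(len(contours))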

AlmostAI

2 Answers


Since the contours are connected, findContours will detect them as a single contour instead of as individual, separated circles. When you have connected contours, a potential approach is to use the watershed algorithm to label and detect each contour separately. Here are the results:

Input image


Output


Code

import cv2
import numpy as np
from skimage.feature import peak_local_max
from skimage.segmentation import watershed  # moved here from skimage.morphology in newer scikit-image
from scipy import ndimage

# Load the image, convert it to grayscale, and apply Otsu's threshold
image = cv2.imread('1.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]

# Compute the Euclidean distance from every foreground pixel to the nearest
# zero pixel, then find peaks (roughly one per object) in the distance map.
# peak_local_max returns peak coordinates, so build a boolean peak mask from
# them (newer scikit-image no longer accepts indices=False)
distance_map = ndimage.distance_transform_edt(thresh)
peak_coords = peak_local_max(distance_map, min_distance=20, labels=thresh)
local_max = np.zeros(distance_map.shape, dtype=bool)
local_max[tuple(peak_coords.T)] = True

# Perform connected component analysis then apply Watershed
markers = ndimage.label(local_max, structure=np.ones((3, 3)))[0]
labels = watershed(-distance_map, markers, mask=thresh)

# Iterate through the unique labels (label 0 is the background)
for label in np.unique(labels):
    if label == 0:
        continue

    # Create a mask
    mask = np.zeros(gray.shape, dtype="uint8")
    mask[labels == label] = 255

    # Find the largest contour for this label and draw it in a random color
    cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cnts = cnts[0] if len(cnts) == 2 else cnts[1]  # handle OpenCV 3.x/4.x return values
    c = max(cnts, key=cv2.contourArea)
    color = tuple(int(v) for v in np.random.randint(0, 256, size=3))
    cv2.drawContours(image, [c], -1, color, 4)

cv2.imshow('image', image)
cv2.waitKey()

Here are some other references:

  1. Image segmentation with Watershed Algorithm

  2. Watershed Algorithm: Marker-based Segmentation

  3. How to define the markers for Watershed

  4. Find contours after watershed

nathancy
    Nice references! Can you share the input image above the resulting image? I think it could make it easier for people to appreciate your algorithm. – karlphillip Feb 03 '20 at 23:10

It seems like your objects follow a pattern and sometimes overlap. I'd suggest you convolve your image with an object pattern and then process the resulting scores image.

In more detail:

Suppose for simplicity that your initial image has only one channel, and that the object you're looking for looks like this: PatternToFind. This is our pattern; say its size is [W_p, H_p].

First step: construct a new image, scores, where each pixel S in scores is the likelihood that this pixel is a pattern center.

One way to do that: for each pixel P in the original image, "cut out" the [W_p, H_p] patch around P (e.g. img(Rect(P - W_p/2, P - H_p/2, W_p, H_p))), subtract the patch from the pattern to measure the "distance" between them (e.g. with cv::sum(cv::absdiff(patch, pattern)) in OpenCV), and save this sum to S. Note that with this measure a lower score means a better match, so negate or invert it before the filtering step below.

Another way to do this: clone the image into S, normalize the pattern with pattern = pattern / cv::sum(pattern), and then run cv::filter2D on S with the pattern as the kernel. With this variant a higher score means a better match.
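A minimal Python sketch of the filter2D variant, assuming a single-channel input and a pattern image ('frame.png' and 'pattern.png' are placeholder file names):

import cv2
import numpy as np

# Placeholders: 'frame.png' is the single-channel input, 'pattern.png' the object template
img = cv2.imread('frame.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)
pattern = cv2.imread('pattern.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Normalize the pattern so it sums to 1, then correlate it with the image.
# Each pixel of `scores` is the pattern-weighted average of its neighborhood,
# which is larger where bright pixels line up with the pattern.
pattern /= pattern.sum()
scores = cv2.filter2D(img, cv2.CV_32F, pattern)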

Now that you have a scores image, you should filter out false positives:

  1. Take only the top 2% of the scores (one way to find the cutoff is with cv::calcHist).

  2. For each pixel that has a neighbor within [W_p, H_p] with a higher score, set this pixel to zero (non-maximum suppression).

Now you should be left with an image of zeros in which only the pattern centers have non-zero values. Hooray!
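Continuing from the sketch above (reusing scores and pattern), the filtering could look roughly like this; the 98th-percentile cutoff corresponds to the suggested top 2%:

import cv2
import numpy as np

# `scores` and `pattern` come from the previous sketch
cutoff = np.percentile(scores, 98)  # keep only the top 2% of scores

# Grayscale dilation with a flat, pattern-sized kernel gives, for each pixel,
# the maximum score in its [W_p, H_p] neighborhood; a pixel survives only if
# it equals that maximum (non-maximum suppression) and is above the cutoff
neighborhood_max = cv2.dilate(scores, np.ones(pattern.shape, np.uint8))
centers = (scores >= neighborhood_max) & (scores >= cutoff)

print('estimated object count:', int(centers.sum()))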

If you don't know in advance what an object looks like, you can find one object using contours, 'cut it out' using the convex hull of its contour (plus its bounding box), and use it as the convolution kernel to find the rest.
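A rough sketch of that extraction step (the file name is a placeholder, and the image is assumed to be a clean binary frame containing at least one complete, isolated object):

import cv2
import numpy as np

# Placeholder: a binary (0/255) image with at least one complete, isolated object
binary = cv2.imread('binary_frame.png', cv2.IMREAD_GRAYSCALE)

# OpenCV 4.x return signature
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
c = max(contours, key=cv2.contourArea)  # pick the largest complete contour

# Fill the contour's convex hull to get a solid shape, then crop it to its bounding box
hull = cv2.convexHull(c)
mask = np.zeros(binary.shape, np.uint8)
cv2.drawContours(mask, [hull], -1, 255, -1)
x, y, w, h = cv2.boundingRect(hull)
pattern = cv2.bitwise_and(binary, mask)[y:y + h, x:x + w].astype(np.float32)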

Shmuel Fine
  • You said "to convolve your image with an object pattern and then process the resulting scores image". Maybe you can point to the necessary methods for that? – AlmostAI Feb 03 '20 at 15:31
  • I've edited my answer with general guidelines. Is it better now? – Shmuel Fine Feb 03 '20 at 16:43