
I've read the https://www.pyimagesearch.com/2015/11/02/watershed-opencv/ tutorial, which opened my eyes to this possibility, and I am now trying to implement it in my current object tracking program. I am struggling because my project works on a video stream and also creates a mask so that only red objects are visible. My main issue is that overlapping objects are counted as one; after reading the tutorial I realized there's an algorithm for exactly this, but I cannot figure out how to fit it into my project.

Could anyone share some insight, point out what I'm missing, and open my eyes to how this is possible?

I appreciate any kind of comment. Thank you very much.

Tutorials/research I have followed:

  • https://www.pyimagesearch.com/2015/11/02/watershed-opencv/
  • Image Segmentation using Mean Shift explained
  • https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_watershed/py_watershed.html

# This is the main functionality in my code. I have no idea where I can implement
# watershed successfully because of the color filtering and the constantly changing background


while True:
    frame = camera.read()  # read camera

    if frame is None:
        print('fail with camera. Is it being used? src # correct?')
        break

    frame = imutils.resize(frame, width=400)  # resize frame
    height = np.size(frame, 0)  # calculates the height of frame
    width = np.size(frame, 1)  # calculates the width of frame
    blurred = cv2.GaussianBlur(frame, (21, 21), 0)  # blurring image before hsv applied (less noise)
    hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)  # creating hsv from blurred frame and converting the bgr to hsv

    mask = cv2.inRange(hsv, np.array(args["a"]), np.array(args["b"]))  # mask keeps only pixels between the lower and upper hsv bounds
    mask = cv2.erode(mask, None, iterations=2)  # erode for less noise / more white
    mask = cv2.dilate(mask, None, iterations=2)  # dilate does similar but makes whiteness thicker
    res = cv2.bitwise_and(frame, frame, mask=mask)

    contours = cv2.findContours(mask.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)  # find contours of mask
    contours = imutils.grab_contours(contours)  # get them

    middleLine = (height / 2)  # calculate the middleLine
    cv2.line(frame, (0, height // 2), (width, height // 2), (100, 200, 100), 2)  # // = int division | draw the line
    rects = []

    if len(contours) > 0:  # don't pass in empty contour!!!
        for c in contours:  # loop through them
            if cv2.contourArea(c) < args["e"]:  # not big enough to be considered an object
                continue  # go next
            (x, y, w, h) = cv2.boundingRect(c)  # create rect for the object
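
If I understand the tutorial correctly, the watershed step itself only needs the binary mask, so something like the following sketch could slot in right after the dilate step. This is not code from my project, just an assumption of how it might look: it needs scipy and scikit-image installed (older scikit-image versions import watershed from skimage.morphology rather than skimage.segmentation), and min_distance=20 is a guessed value that would have to be tuned to the size of my objects.

# A sketch only: split a binary mask into separately labelled objects using
# the distance transform + watershed approach from the pyimagesearch tutorial.
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed


def split_touching_objects(mask, min_distance=20):
    # distance of every foreground pixel to the nearest background pixel
    dist = ndimage.distance_transform_edt(mask)

    # local maxima of the distance map -> roughly one peak per object
    coords = peak_local_max(dist, min_distance=min_distance, labels=mask)
    peaks = np.zeros(dist.shape, dtype=bool)
    peaks[tuple(coords.T)] = True

    # number the peaks, then flood the negated distance map from those markers
    markers, _ = ndimage.label(peaks)
    labels = watershed(-dist, markers, mask=mask)
    return labels  # 0 = background, 1..N = one label per separated object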

I expect to be able to apply the watershed algorithm so that I can uniquely identify overlapping objects in a webcam stream that also uses color filtering. The tutorials I've followed always leave me on a "cliffhanger", if you will, because they use methods that work on single images rather than video, and they don't do any color filtering, so I can't picture what I need to do to make it work for a video stream with color filtering.
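
For what it's worth, here is roughly how I imagine the sketch above would be wired into the while loop, replacing the single findContours pass after the dilate step. Again only a sketch under assumptions: it uses the split_touching_objects helper from above and assumes args["e"] is my minimum contour area; since each frame's mask is just a still image, the same call should work on every iteration.

    # inside the existing while loop, right after cv2.dilate(...)
    labels = split_touching_objects(mask)

    rects = []
    for label in np.unique(labels):
        if label == 0:  # skip the background label
            continue
        label_mask = np.zeros(mask.shape, dtype="uint8")
        label_mask[labels == label] = 255  # isolate one segmented object
        cnts = cv2.findContours(label_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        cnts = imutils.grab_contours(cnts)
        c = max(cnts, key=cv2.contourArea)
        if cv2.contourArea(c) < args["e"]:  # same minimum-area filter as before
            continue
        rects.append(cv2.boundingRect(c))  # one bounding box per separated object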

  • Please clarify what you’re actually asking here. Each frame in a video is an image. Where is the difficulty? – Cris Luengo Jun 01 '19 at 13:53
  • My question is does anyone have any idea how to do this for a video that has color filtering? I personally have no clue and would love to find out as I have been struggling to achieve it on my own – rustyranger Jun 01 '19 at 19:57

0 Answers