
I have a video with 5 oil droplets, and I am trying to use cv2.HoughCircles to find them.

This is my code:

import cv2
import numpy as np

foreground1 = cv2.imread("foreground1.jpg")
vid = cv2.VideoCapture("NB14.avi")

cv2.namedWindow("video")
cv2.namedWindow("canny")
cv2.namedWindow("blur")

while True:
    ret, frame = vid.read()
    if not ret:
        break
    subtract1 = cv2.subtract(foreground1, frame)
    framegrey1 = cv2.cvtColor(subtract1, cv2.COLOR_RGB2GRAY)
    blur = cv2.GaussianBlur(framegrey1, (0, 0), 2)
    circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=2, minDist=10,
                               param1=40, param2=80, minRadius=5, maxRadius=100)
    if circles is not None:
        for c in circles[0]:
            cv2.circle(frame, (int(c[0]), int(c[1])), int(c[2]), (0, 255, 0), 2)
    edges = cv2.Canny( blur, 40, 80 )
    cv2.imshow("video", frame)
    cv2.imshow("canny", edges)
    cv2.imshow("blur", blur)
    key = cv2.waitKey(30)

I would say that the Canny edge detector output looks very good, while the results from the Hough transform are very unstable: every frame produces different circles.

Example:

(screenshots of three consecutive frames omitted)

I have been playing with the parameters, and honestly I don't know how to get more stable results.
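One idea I am experimenting with is to stop trusting any single frame and instead smooth each droplet's detection over time. A rough pure-NumPy sketch (the `smooth_circles` helper and its `alpha`/`max_jump` thresholds are my own invention, not OpenCV API):

```python
import numpy as np

def smooth_circles(prev, detected, alpha=0.3, max_jump=20.0):
    """Blend each previous (x, y, r) circle with the nearest newly
    detected one; a detection that jumps further than max_jump pixels
    is treated as an outlier and ignored for that circle."""
    if prev is None:
        return np.asarray(detected, dtype=float)
    prev = np.asarray(prev, dtype=float)
    detected = np.asarray(detected, dtype=float)
    out = prev.copy()
    for i, (x, y, r) in enumerate(prev):
        d = np.hypot(detected[:, 0] - x, detected[:, 1] - y)
        j = int(np.argmin(d))
        if d[j] <= max_jump:
            out[i] = (1 - alpha) * prev[i] + alpha * detected[j]
    return out
```

Each frame, the idea would be to pass `circles[0]` from `cv2.HoughCircles` as `detected` and keep the returned array as the running state for the next frame.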

Dr Sokoban
  • Can you include some of the actual images without any processing applied on them ? This looks very simple to require a Hough transform. – mmgp Jan 30 '13 at 17:33
  • In the images I pasted, the left image is the actual frame without any processing apart from the green circle. That's how the frames are delivered from the cam. What I want is to find the droplets in every frame, because I need to track them. I am also trying Otsu thresholding. – Dr Sokoban Jan 30 '13 at 17:34
  • Yes, that is exactly what I meant. I don't want the green circles. – mmgp Jan 30 '13 at 17:40
  • anywhere I can upload the video? – Dr Sokoban Jan 30 '13 at 17:45
  • Just save some of the frames and include them. Otherwise there are a couple of places for free uploading, pick your favorite. – mmgp Jan 30 '13 at 17:47
  • here I hope you can get it: http://dl.dropbox.com/u/17284290/NB14.avi – Dr Sokoban Jan 30 '13 at 17:48
  • Have you tried using [fitEllipse](http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=fitellipse#cv2.fitEllipse)? Your Canny results are clean enough that you might be able to track the droplets that way. – Aurelius Jan 30 '13 at 19:12

1 Answer


Initially I thought there would be no overlapping among your oil droplets, but there is. So Hough might indeed be a good method here, but I have had better experience combining it with RANSAC. I would suggest exploring that; here, however, I will provide something different.

First of all, I couldn't perform the background subtraction that you do, since I don't have the "foreground1.jpg" image (so the results can easily be improved). I also didn't bother drawing circles; instead I simply draw the border of each object that I consider a circle.

So, first let us suppose there is no overlapping. Then finding the edges in your image (easy), binarizing the edge detector's response with Otsu, filling holes, and finally measuring circularity is enough. When there are overlaps, we can use the watershed transform combined with the distance transform to separate the droplets. The drawback is that the separated objects are no longer truly circular; I didn't adjust for that here, but you can.
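For reference, the circularity measure used here is 4πA/P², which is 1 for an ideal disc and drops below 1 as the shape deviates from a circle. A quick sanity check (the `circularity` helper is just the formula, named for illustration):

```python
import math

def circularity(area, perimeter):
    # 4*pi*A / P^2: 1.0 for an ideal circle, smaller for anything else.
    return 4 * math.pi * area / perimeter ** 2

r = 10.0
ideal = circularity(math.pi * r ** 2, 2 * math.pi * r)  # ~ 1.0
s = 20.0
square = circularity(s * s, 4 * s)                      # pi/4, ~ 0.785
```

On a discrete pixel contour, `cv2.arcLength` tends to overestimate the perimeter, which is presumably one reason the threshold in the code below is so forgiving.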

In the following code I also use scipy for labeling connected components (important for building the watershed marker), since OpenCV lacks that. The code is not exactly short, but it should be simple to understand. Also, given the full code below, the circularity check is not strictly needed, because after the watershed segmentation only the objects you are after remain. Lastly, there is some simplistic tracking based on the rough distance to each object's center.

import sys
import cv2
import math
import numpy
from scipy.ndimage import label

pi_4 = 4*math.pi

def segment_on_dt(img):
    border = img - cv2.erode(img, None)

    dt = cv2.distanceTransform(255 - img, cv2.DIST_L2, 3)  # L2 distance, 3x3 mask
    dt = ((dt - dt.min()) / (dt.max() - dt.min()) * 255).astype(numpy.uint8)
    _, dt = cv2.threshold(dt, 100, 255, cv2.THRESH_BINARY)

    lbl, ncc = label(dt)
    lbl[border == 255] = ncc + 1

    lbl = lbl.astype(numpy.int32)
    cv2.watershed(cv2.cvtColor(img, cv2.COLOR_GRAY2RGB), lbl)
    lbl[lbl < 1] = 0
    lbl[lbl > ncc] = 0

    lbl = lbl.astype(numpy.uint8)
    lbl = cv2.erode(lbl, None)
    lbl[lbl != 0] = 255
    return lbl


def find_circles(frame):
    frame_gray = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)
    frame_gray = cv2.GaussianBlur(frame_gray, (5, 5), 2)

    edges = frame_gray - cv2.erode(frame_gray, None)
    _, bin_edge = cv2.threshold(edges, 0, 255, cv2.THRESH_OTSU)
    height, width = bin_edge.shape
    mask = numpy.zeros((height+2, width+2), dtype=numpy.uint8)
    cv2.floodFill(bin_edge, mask, (0, 0), 255)

    components = segment_on_dt(bin_edge)

    circles, obj_center = [], []
    contours, _ = cv2.findContours(components,
            cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

    for c in contours:
        c = c.astype(numpy.int64) # XXX OpenCV bug.
        area = cv2.contourArea(c)
        if 100 < area < 3000:
            arclen = cv2.arcLength(c, True)
            circularity = (pi_4 * area) / (arclen * arclen)
            if circularity > 0.5: # XXX Yes, pretty low threshold.
                circles.append(c)
                box = cv2.boundingRect(c)
            obj_center.append((box[0] + box[2] // 2, box[1] + box[3] // 2))

    return circles, obj_center

def track_center(objcenter, newdata):
    for i in range(len(objcenter)):
        ostr, oc = objcenter[i]
        best = min(((c[0] - oc[0]) ** 2 + (c[1] - oc[1]) ** 2, j)
                for j, c in enumerate(newdata))
        j = best[1]
        if i == j:
            objcenter[i] = (ostr, newdata[j])
        else:
            print("Swapping %s <-> %s" % ((i, objcenter[i]), (j, objcenter[j])))
            objcenter[i], objcenter[j] = objcenter[j], objcenter[i]


video = cv2.VideoCapture(sys.argv[1])

obj_center = None
while True:
    ret, frame = video.read()
    if not ret:
        break

    circles, new_center = find_circles(frame)
    if obj_center is None:
        obj_center = [(str(i + 1), c) for i, c in enumerate(new_center)]
    else:
        track_center(obj_center, new_center)

    for i in range(len(circles)):
        cv2.drawContours(frame, circles, i, (0, 255, 0))
        cstr, ccenter = obj_center[i]
        cv2.putText(frame, cstr, ccenter, cv2.FONT_HERSHEY_COMPLEX, 0.5,
                (255, 255, 255), 1, cv2.LINE_AA)

    cv2.imshow("result", frame)
    cv2.waitKey(10)
    if len(circles) < 5:
        print("lost something")

This works for your entire video, and here are two samples:

(two sample output frames omitted)
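A side note on the `frame_gray - cv2.erode(frame_gray, None)` step above: with `None`, OpenCV uses a 3x3 flat structuring element, so this is half of a morphological gradient and it slightly grows dark regions. A tiny pure-NumPy demo (my `erode3x3` stands in for `cv2.erode(img, None)` so the example runs without OpenCV):

```python
import numpy as np

def erode3x3(img):
    # Minimum over each pixel's 3x3 neighbourhood (flat structuring
    # element, replicated border) -- what cv2.erode(img, None) computes.
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return np.min([p[dy:dy + h, dx:dx + w]
                   for dy in range(3) for dx in range(3)], axis=0)

img = np.full((5, 5), 255, dtype=np.int32)
img[2, 2] = 0                # one dark "droplet" pixel
edge = img - erode3x3(img)   # bright one-pixel ring around the dark pixel
```

Erosion spreads the dark pixel over its 3x3 neighbourhood, so the difference lights up exactly the ring around the blob, which is why the droplets appear slightly bigger in the edge image.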

mmgp
  • wow thank you very much! I need to read through it and see the images from every step to understand it, so I may come back in some hours. I know the basics of RANSAC; can you explain how you would combine it with Hough to improve the results? – Dr Sokoban Jan 31 '13 at 09:57
  • By the way, I am not searching for circles, I am searching for droplets. So it really doesn't matter whether they are perfect circles or not. – Dr Sokoban Jan 31 '13 at 11:11
  • @DrSokoban RANSAC takes samples before the detection, so it handles outliers well. Hough instead considers every point, so outliers may shadow the good circles it could otherwise have found. – mmgp Jan 31 '13 at 13:06
  • Care to explain in a bit more detail? I guess you want to use RANSAC to find circles, but I don't get whether you do that in picture space or in Hough space. Also, is there any way I can get the accumulator matrix? – Dr Sokoban Jan 31 '13 at 13:18
  • What is cv2.erode(frame_gray, None) doing? With None as the kernel, it seems to make the droplets slightly bigger, but why? – Dr Sokoban Jan 31 '13 at 14:34
  • When you use `None` for the structuring element, OpenCV assumes a 3x3 flat SE. It is expected that this erosion makes the droplets slightly bigger, since they are dark droplets. I used it as a simple form of edge detection, but you can replace it with any other method that gives equally good (or better) edges. On RANSAC: in principle everything in RANSAC happens in picture space, as you say; it doesn't rely on an accumulator matrix the way Hough implementations do. For more details you will have to find a paper or book about it; there isn't much space for that in the comments. – mmgp Jan 31 '13 at 15:11
  • Ok I am going to check some papers, but your solution looks perfect. Any idea to do tracking? So far I am thinking just comparing x,y position over time and going for the nearest one to assign the relation between frames. – Dr Sokoban Jan 31 '13 at 16:19
  • @DrSokoban that is probably enough for the problem here. I might update the answer to include this simple tracking, I will test it later. – mmgp Jan 31 '13 at 16:31
  • @DrSokoban I've added the simplistic tracker now. – mmgp Jan 31 '13 at 23:53