
I am working on an algorithm to match the centroids of bacteria using computer vision.

As I'm an undergraduate and beginner to computer vision, I do not have code specifically for this problem. Just to provide some background, I'm using the following functions in my GUI.

The 'bact' variable refers to Bacteria objects, which store each bacterium's ID, position, etc.

def identify_fluor(img, frame: int):
    darkBlue = (139, 0, 0)
    for bact in fluor_at_frame(frame):
        pos = tuple(int(coord) for coord in bact.position[frame])
        img = cv2.circle(img, pos, 5, darkBlue, -1)
    return img

def identify_bright(img, frame: int):
    darkRed = (0, 0, 139)
    for bact in bright_at_frame(frame):
        pos = tuple(int(coord) for coord in bact.position[frame])
        img = cv2.circle(img, pos, 5, darkRed, -1)
    return img

These centroids are found using the best software currently available in the image-processing literature. As you can see, processing of the bright-field images on the right is significantly underdeveloped, and this is a major hurdle and nuisance for bacteriology researchers.

We need the images on the right to be processed because they have a much higher sampling rate (one image per second [right] vs. one per 11 seconds [left]). The fluorescence images (left) accumulate chemical damage when sampled too frequently, losing their fluorescence.

These are some instances when the images align perfectly:

Sample 1 of Bacteria Match:

Sample 2 of Bacteria Match:

Sample 3 of Bacteria Match:

In these cases, the images on the right are at an intermediate stage before reaching the next aligned image.

Sample 4 of Bacteria Match:

Sample 5 of Bacteria Match:

Sample 6 of Bacteria Match:

Bright-Field Images

Sample 1 of Bright-Field

Sample 2 of Bright-Field

Sample 3 of Bright-Field

Additional Links

Sample 4 of Bright-Field

Sample 5 of Bright-Field

Sample 6 of Bright-Field

Sample 7 of Bright-Field

Sample 8 of Bright-Field

Sample 9 of Bright-Field

Note: This is not homework. I am doing a research project trying to gain information on the temporal dynamics of bacteria. I am trying to achieve a working solution on one of the samples of the images.

Edit #1: For clarification, I am trying to find the centroids of the bacteria on the right using the bacteria on the left.

Edit #2: I am not looking to match the images by applying a linear transformation. A computer vision algorithm is sought.

Edit #3: Additional bright-field images have been added separately for testing purposes.

Raiyan Chowdhury
  • Can you provide some images? – Rahul Kedia Jul 19 '20 at 16:37
  • I've added them. – Raiyan Chowdhury Jul 19 '20 at 16:40
  • take a look at https://stackoverflow.com/questions/62969818/finding-each-centroid-of-multiple-connected-objects/62970339#62970339 – L.Grozinger Jul 19 '20 at 16:52
  • 3
    Also, what do you mean by "match the centroids", do you want to draw a line between them, find distance or anything else? – Rahul Kedia Jul 19 '20 at 16:56
  • check this question as an example: https://stackoverflow.com/questions/52546428/how-to-detect-object-position-in-image-in-tensorflow – Mehdi Zare Jul 19 '20 at 16:59
  • It seems like the bacteria are always in a central band in the picture. But the band is not always in the same spot. If it was in the same place every time, you could simply crop the picture. But you can probably use [Hough lines](https://docs.opencv.org/master/d6/d10/tutorial_py_houghlines.html) to find the central band. Then crop. Then use thresholding to turn the background white (bacteria are already dark). Then use findContours. OpenCV has great [Python tutorials](https://docs.opencv.org/master/d6/d00/tutorial_py_root.html) – bfris Jul 19 '20 at 17:15
  • Don't you have to find the centroids before matching them (which would be your real question), or are they given ? –  Jul 19 '20 at 17:21
  • To clarify, I actually need to find the centroids in the right image. I can match between the two once found through an algorithm I’ve created. – Raiyan Chowdhury Jul 19 '20 at 17:31
  • If this changes anything, segmentation of the images on the right (known as Bright-field) is an open problem. For this reason, I'm using the images on the left to first find the centroids of the bacteria on the right. I've tried FindContours unsuccessfully and doubt it would work as this method has been tried by a number of researchers in image processing. – Raiyan Chowdhury Jul 19 '20 at 19:23
  • I was expecting to have to use some software such as SIFT/SURF to match the points from the images on the left onto the right. However, this no longer appears to be available for Python. – Raiyan Chowdhury Jul 19 '20 at 19:26
  • SIFT/SURF will soon be available as their patents have expired. ORB is a pretty good alternative to those algos. However, I don't think there is enough structure in these images to warrant feature detection. Once you get the image pre-processed, findContours should be able to find the bacteria. [This answer outlines how to find centroid](https://stackoverflow.com/a/9058880/9705687) from contours. – bfris Jul 20 '20 at 04:41
  • @bfris Thanks for this, I'll try it out. – Raiyan Chowdhury Jul 23 '20 at 01:55
  • @RaiyanChowdhury, can you please provide the image on the right separately, as I have an approach which can work directly on the image on the right but I have to try it out. – Rahul Kedia Jul 23 '20 at 05:21
  • @RahulKedia Added. – Raiyan Chowdhury Jul 23 '20 at 09:10

2 Answers

My approach works directly on the right image.

The code is shared below and explained with comments:

At the beginning, I create a function that erodes and then dilates the image with a circular kernel a specified number of times.

import cv2
import numpy as np

kernel = np.array([[0, 0, 1, 0, 0],
                   [0, 1, 1, 1, 0],
                   [1, 1, 1, 1, 1],
                   [0, 1, 1, 1, 0],
                   [0, 0, 1, 0, 0]], dtype=np.uint8)
def e_d(image, it):
    image = cv2.erode(image, kernel, iterations=it)
    image = cv2.dilate(image, kernel, iterations=it)
    return image

Note: The image on the right is read in grayscale into the variable "img".

# Applying adaptive mean thresholding
th = cv2.adaptiveThreshold(img,255,cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV,11,2)
# Removing small noise
th = e_d(th.copy(), 1)

# Finding contours with RETR_EXTERNAL flag and removing undesired contours and 
# drawing them on a new image.
cnt, hie = cv2.findContours(th, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
cntImg = th.copy()
for contour in cnt:
    x,y,w,h = cv2.boundingRect(contour)
    # Eliminating the contour if its width is more than half of image width
    # (bacteria will not be that big).
    if w > img.shape[1]/2:      
        continue
    cntImg = cv2.drawContours(cntImg, [cv2.convexHull(contour)], -1, 255, -1)

# Removing almost all the remaining noise. 
# (Some big circular noise will remain along with bacteria contours)
cntImg = e_d(cntImg, 5)


# Finding new filtered contours again
cnt2, hie2 = cv2.findContours(cntImg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

# Now eliminating circular type noise contours by comparing each contour's 
# extent of overlap with its enclosing circle.
finalContours = []      # This will contain the final bacteria contours
for contour in cnt2:
    # Finding minimum enclosing circle
    (x,y),radius = cv2.minEnclosingCircle(contour)
    center = (int(x),int(y))
    radius = int(radius)

    # Creating an image with only this circle drawn on it (filled with white)
    circleImg = np.zeros(img.shape, dtype=np.uint8)
    circleImg = cv2.circle(circleImg, center, radius, 255, -1)

    # Creating an image with only the contour drawn on it (filled with white)
    contourImg = np.zeros(img.shape, dtype=np.uint8)
    contourImg = cv2.drawContours(contourImg, [contour], -1, 255, -1)

    # XOR: white pixels present in only one of the circle and contour
    # images stay white; pixels common to both become black.
    union_inter = cv2.bitwise_xor(circleImg, contourImg)

    # Ratio of the non-overlapping area to the enclosing circle's area.
    # The smaller the ratio, the more circular the contour.
    ratio = np.sum(union_inter == 255) / np.sum(circleImg == 255)
    
    # Storing only non circular contours(bacteria)
    if ratio > 0.55:
        finalContours.append(contour)

finalContours = np.asarray(finalContours)


# Finding center of bacteria and showing it.
bacteriaImg = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)

for bacteria in finalContours:
    M = cv2.moments(bacteria)
    cx = int(M['m10']/M['m00'])
    cy = int(M['m01']/M['m00'])

    bacteriaImg = cv2.circle(bacteriaImg, (cx, cy), 5, (0, 0, 255), -1)
    
cv2.imshow("bacteriaImg", bacteriaImg)
cv2.waitKey(0)

NOTE: I am only working on the image on the right, and my image's size is (221, 828). If your input image is smaller or larger than this, adjust the number of erosion and dilation iterations used to remove noise accordingly to get good results.

Here are the output images:

[Output images 1, 2 and 3]

Also, as you can see in the third image, the center of the leftmost bacterium is not marked exactly at its center. This happens because the code uses the convex hull of the contours at one point. You can solve this by keeping track of all the original contours and, at the end, taking the center of the initial contour.
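That bookkeeping fix could be sketched roughly as below. This is my own assumption of how to wire it up, not code from the answer: `original_contours` and `final_contours` stand for the `cnt` and `finalContours` variables above, and I use the mean of the contour points as a quick centroid proxy (the answer's code uses cv2.moments for the true centroid):

```python
import numpy as np

def centroid(contour):
    # Quick centroid proxy: mean of the contour points.
    # cv2.moments, as used in the answer, gives the true area centroid.
    pts = np.asarray(contour, dtype=float).reshape(-1, 2)
    return pts.mean(axis=0)

def snap_to_originals(final_contours, original_contours):
    """For each post-convex-hull contour, return the centroid of the
    nearest pre-hull contour, so the mark stays on the bacterium."""
    orig_centers = [centroid(c) for c in original_contours]
    snapped = []
    for c in final_contours:
        fc = centroid(c)
        nearest = min(orig_centers,
                      key=lambda p: float(np.sum((p - fc) ** 2)))
        snapped.append(tuple(int(v) for v in nearest))
    return snapped
```

You would then draw the circles at the snapped centers instead of the hull-contour centers.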

I am sure this code can be modified and made better, but this is what I could think of right now. Any suggestions are most welcome.

Rahul Kedia
  • `e_d()` is called an opening. – Cris Luengo Jul 23 '20 at 14:11
  • @CrisLuengo, Yeah, it is morphological opening, but I prefer to use it this way so that I have full control over the process. – Rahul Kedia Jul 23 '20 at 14:17
  • Brilliant answer. I think in the third image you *have* found the mathematical center. However, a human looking at the picture would always place the center inside the bacterium. Since the axis of the bacterium is generally left and right (in the x direction), you could get the "human" center by using the mathematical x center and for the y center, calculate a local center right where the x is. – bfris Jul 23 '20 at 14:54
  • This is an excellent solution, thanks very much Rahul! Can I request anyone working on the problem test images 7-9 with this algorithm? It works excellent for the beginning images but gradually breaks down, I'd like to see if there's a way to fix this. – Raiyan Chowdhury Jul 23 '20 at 20:00
  • @RaiyanChowdhury, Stack Overflow is not a free code writing service. You are expected to try to write the code yourself. RahulKedia has given a very good answer. – bfris Jul 24 '20 at 03:37
  • Right, of course. As this solution solved the initial problem and has been a tremendous help, I'll mark the problem resolved. Much appreciated Rahul. – Raiyan Chowdhury Jul 24 '20 at 08:46

This seems to be an easy calibration problem.

Find two corresponding points on the left and right (i.e. the same points in the real world). If your setup is fixed, you can do that "manually", once and for all. You may have to add markers for this purpose (or use two distant bacteria centers that you match visually). If the setup is not fixed, add the markers anyway and design them so that they are easy to locate by image processing.

Now you have a simple linear relation between the left and right coordinates by solving

XR = a XL + b

for the two points. Then using one of the points to find c,

YR = a YL + c

holds.

Now knowing a, b, c, every point on the left can be mapped to the right. From your sample images, I have determined that

a ~ 1.128
b ~ 773
c ~ -16

very roughly.
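The two-point fit can be sketched in a few lines (the helper names are hypothetical, and the coordinates in the usage example are made up, not measured from the images):

```python
def fit_calibration(p1_left, p1_right, p2_left, p2_right):
    """Solve XR = a*XL + b from the two x-coordinates,
    then YR = a*YL + c from one of the points."""
    a = (p2_right[0] - p1_right[0]) / (p2_left[0] - p1_left[0])
    b = p1_right[0] - a * p1_left[0]
    c = p1_right[1] - a * p1_left[1]
    return a, b, c

def map_left_to_right(point, a, b, c):
    """Map a left-image point (x, y) to right-image coordinates."""
    return (a * point[0] + b, a * point[1] + c)

# Usage with made-up matched points:
a, b, c = fit_calibration((1, 1), (5, 1), (4, 6), (11, 11))
mapped = map_left_to_right((10, 10), a, b, c)
```

Once a, b, c are known, every fluorescence centroid can be mapped into bright-field coordinates with `map_left_to_right`.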


Do not attempt any kind of matching of the shapes, rely on the geometric transformation of the coordinates.

  • Thanks for this. However, the reason I'm looking for an actual computer vision algorithm is because the images on the left have a lesser sampling rate (11 sec VS 1 sec) than the right. The idea is to use the algorithm to find the centroids after there is a slight movement in the right, so we can gain better temporal dynamics of the bacteria. I've posted the images when they match exactly (i.e. the timestamp is an integer multiple of 11). – Raiyan Chowdhury Jul 19 '20 at 23:00
  • @RaiyanChowdhury: you should have said that in the first place. In fact, most of the time you can't do any matching as there is no left image. So what ?!? Please ask the right question. –  Jul 20 '20 at 07:57
  • Hi Yves, I understand your sentiment, however, I'm trying to solve the problem in stages. The final goal is actually to have the right images fully segmented, which remains an open problem in literature. Posting the entire problem is likely unproductive at this stage. I want to find a computer vision algorithm that can match the centroids in the images when they align exactly, then slowly work up from this. – Raiyan Chowdhury Jul 20 '20 at 11:24
  • @RaiyanChowdhury My post fully answers your current question. –  Jul 20 '20 at 11:25
  • @RaiyanChowdhury: are you unable to solve a 2x2 linear system ? –  Jul 20 '20 at 11:30
  • Yves I think I was clear I'm looking for a computer vision algorithm. My sincere apologies if it seemed otherwise, and I had applied your algorithm for the current stage after your response regardless. It definitely helps, however it currently does not solve the problem I have. – Raiyan Chowdhury Jul 20 '20 at 11:36
  • Yves I'm able to solve it through a linear transformation, but again, I respectfully inform you that I explicitly mentioned "computer vision algorithm" in the problem. – Raiyan Chowdhury Jul 20 '20 at 11:39
  • @RaiyanChowdhury: this is what is called an [XY problem](https://en.m.wikipedia.org/wiki/XY_problem). You want to detect and track bacteria in a brightfield image, and believe that this matching will help you get there. But you’re wasting everyone’s time with that because it won’t. For the 1/11 frames that have a perfect fluorescence match it will, and this answer helps you there. For the other 10/11 frames the matching is pointless, because you can have a different number of bacteria there. – Cris Luengo Jul 23 '20 at 14:08
  • @CrisLuengo Yes, this currently poses a potential issue. However, since there is very little movement between adjacent images, I think there's a possibility it may work. We can also try matching adjacent brightfield images for the images that are closer to the center positions, or working with both fluorescent endpoints. – Raiyan Chowdhury Jul 23 '20 at 14:42
  • Unfortunately as I'm an undergraduate I don't have control over the direction of research in the lab. At the end of the day, however, the goal is to determine the centroid data in the brightfield images, it doesn't matter if the fluorescent images help. – Raiyan Chowdhury Jul 23 '20 at 14:53
  • @RaiyanChowdhury: Did you try this one for segmenting and tracking bacteria in brightfield? https://www.researchgate.net/publication/307446006_SuperSegger_Robust_image_segmentation_analysis_and_lineage_tracking_of_bacterial_cells – Cris Luengo Jul 23 '20 at 14:57
  • @CrisLuengo Yep, this is the software I'm using. :) – Raiyan Chowdhury Jul 23 '20 at 14:58