2

I've recently been working on a segmentation process for corneal endothelial cells, and I've found a pretty decent paper that describes how to perform it with nice results. I have been trying to follow that paper and implement it using scikit-image and OpenCV, but I've gotten stuck at the watershed segmentation.

I will briefly describe how the process is supposed to work:

First of all, you have the original endothelial cell image: original image

Then, they instruct you to perform a morphological grayscale reconstruction, in order to level out the grayscale of the image a little (however, they do not explain how to obtain the marker image for the reconstruction, so I've been experimenting and tried to get one on my own).
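
(For concreteness, the standard h-dome-style reconstruction shown in the scikit-image docs looks roughly like the sketch below; the value of h is just a placeholder I picked, and since the paper doesn't say how to build the marker, my own attempt further down uses an eroded image as the seed instead.)

import cv2
from skimage.morphology import reconstruction

f = cv2.imread('input.png', 0).astype(float)

# the seed must lie below the mask everywhere; subtracting a constant h
# from the image is the usual h-dome seed (h is arbitrary and needs tuning)
h = 40
seed = f - h
r = reconstruction(seed, f, method='dilation')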

This is what the reconstructed image was supposed to look like: desired reconstruction

This is what my reconstructed image (let's label it r) looks like: my reconstruction

The purpose is to use the reconstructed image to get the markers for the watershed segmentation. How do we do that? We take the original image (let's label it f) and threshold (f - r) to extract the h-domes of the cells, i.e., our markers.
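
(In code, that step is just a subtraction and a threshold; here is a minimal, self-contained sketch, where both the h value and the threshold are placeholders of mine rather than values from the paper.)

import cv2
import numpy as np
from skimage.morphology import reconstruction

f = cv2.imread('input.png', 0).astype(float)
r = reconstruction(f - 40, f, method='dilation')  # grayscale reconstruction as sketched above

# h-domes: original minus its reconstruction, then a binary threshold
# turns the domes into marker blobs (the threshold value needs tuning)
hdomes = f - r
markers_binary = (hdomes > 10).astype(np.uint8) * 255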

This is what the h-domes image was supposed to look like: desired hdomes

This is what my h-domes image looks like: my hdomes

I believe that the h-domes I've got are as good as theirs, so the final step is to perform the watershed segmentation on the original image, using the h-domes we've worked so hard to get!

As the input image, we will use the inverted original image, and as markers, our marker image.
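
(A minimal sketch of that final call with scikit-image, again with placeholder values of mine; the detail to note is that skimage's watershed expects the markers as a labelled integer image, hence the ndi.label call.)

import cv2
from scipy import ndimage as ndi
from skimage.morphology import reconstruction, watershed  # watershed is in skimage.segmentation on newer versions

f = cv2.imread('input.png', 0).astype(float)

# markers: thresholded h-domes, as sketched in the previous steps
r = reconstruction(f - 40, f, method='dilation')
markers_binary = (f - r) > 10

# each connected marker blob gets its own integer label
marker_labels = ndi.label(markers_binary)[0]

# flood the inverted original image starting from those markers
labels = watershed(255 - f, marker_labels)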

This is the desired output:

desired output

However, I am only getting a black image (every single pixel is black) and I have no idea what's happening. I've also tried using their markers and inverted image, but I still get a black image. The paper I've been using is Luc M. Vincent, Barry R. Masters, "Morphological image processing and network analysis of cornea endothelial cell images", Proc. SPIE 1769.

I apologize for the long text, but I really wanted to explain in detail what my understanding is so far. By the way, I've tried the watershed segmentation from both scikit-image and OpenCV; both gave me the black image.

Here is the code that I have been using:

import cv2
import matplotlib.pyplot as plt
from skimage.morphology import reconstruction, watershed

img = cv2.imread('input.png', 0)
mask = img

# seed for the grayscale reconstruction: an eroded copy of the image
marker = cv2.erode(mask, cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)), iterations=3)
reconstructedImage = reconstruction(marker, mask)

# h-domes: original minus its reconstruction, thresholded to get the markers
hdomes = img - reconstructedImage
cell_markers = cv2.threshold(hdomes, 0, 255, cv2.THRESH_BINARY)[1]

# watershed on the inverted original image, using the h-dome markers
inverted = (255 - img)
labels = watershed(inverted, cell_markers)

cv2.imwrite('test.png', labels)

plt.figure()
plt.imshow(labels)
plt.show()

Thank you!

  • 3
    I think this issue should be fairly straightforward to sort out, but you'll have to give us some actual code and data to work with. – Stefan van der Walt Nov 28 '17 at 19:34
  • 1
    So `cv2.watershed()` expects both sure foreground and sure background. You've shown your sure foreground...where's the sure background? I suggest taking a look at my answer [here](https://stackoverflow.com/questions/46036172/irregular-shape-detection-and-measurement-in-python-opencv/46084597#46084597) which is a similar type of problem, and successfully uses the OpenCV watershed. Also see the OpenCV [watershed](https://docs.opencv.org/3.3.1/d3/db4/tutorial_py_watershed.html) tutorial. You can easily use a new thresholded image with just the dark parts of your image as the sure background. – alkasm Nov 29 '17 at 00:56
  • Also I don't follow why the inverted image or markers were used? OpenCV expects markers to be white on black bg, not the other way around (as the desired result shows). And I don't see any benefit to inverting the original image? – alkasm Nov 29 '17 at 01:00
  • This is my result, https://i.stack.imgur.com/OuUkP.png. Refer to https://docs.opencv.org/3.1.0/d3/db4/tutorial_py_watershed.html – Kinght 金 Nov 29 '17 at 07:26
  • I've updated with my code :) Holy shit Silencer, how the hell did you get those results!? Do you have the code? That enhanced img looks so different from my grayReconstruction one.. Alexander Reynolds, I've tried with inverted markers as well... only getting the black image.. Btw, the 'desired results' give us a pretty different segmentation, it looks like all the cells are bound to each other, you know? – Bruno Guerra Nov 29 '17 at 16:47
  • @BrunoGuerra would you be able to also share the code for the function `reconstruction`? I won't be able to run your code without it :) – NeverNervous Nov 30 '17 at 05:25
  • @NeverNervous, it is from the scikit-image package `skimage.morphology.reconstruction`, it is not a function that I wrote :( hhahaha – Bruno Guerra Nov 30 '17 at 12:15
  • @Silencer, what process have you used to get the enhanced one? Thank you – Bruno Guerra Feb 05 '18 at 14:51

1 Answer

2

Here's a rough example for the watershed segmentation of your image with scikit-image.

What is missing in your script is calculating the Euclidean distance transform (see here and here) and extracting the local maxima from it.

Note that the watershed algorithm outputs a piecewise-constant image where pixels in the same region are assigned the same value. What your 'desired output' panel (e) shows is the edges between the regions instead.

import numpy as np
import cv2
import matplotlib.pyplot as plt
from skimage.morphology import watershed
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.filters import threshold_local

img = cv2.imread('input.jpg',0)

'''Adaptive thresholding 
   calculates thresholds in regions of size block_size surrounding each pixel
   to handle the non-uniform background'''
block_size = 41
adaptive_thresh = threshold_local(img, block_size)#, offset=10)
binary_adaptive = img > adaptive_thresh

# Calculate Euclidean distance
distance = ndi.distance_transform_edt(binary_adaptive)

# Find local maxima of the distance map 
local_maxi = peak_local_max(distance, labels=binary_adaptive, footprint=np.ones((3, 3)), indices=False)
# Label the maxima
markers = ndi.label(local_maxi)[0]

''' Watershed algorithm
    The option watershed_line=True leaves a one-pixel-wide line 
    with label 0 separating the regions obtained by the watershed algorithm '''
labels = watershed(-distance, markers, watershed_line=True)

# Plot the result
plt.imshow(img, cmap='gray')
plt.imshow(labels==0,alpha=.3, cmap='Reds')
plt.show()
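
Side note: on newer scikit-image releases the imports above need a small update: watershed has moved to skimage.segmentation, and peak_local_max no longer accepts indices=False; it returns the peak coordinates instead, so the boolean mask has to be built by hand. The snippet below (reusing distance and binary_adaptive from the script above) is a sketch of the equivalent marker construction:

import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed  # was skimage.morphology.watershed
from skimage.feature import peak_local_max

# peak_local_max now returns an array of peak coordinates
coords = peak_local_max(distance, labels=binary_adaptive.astype(int), footprint=np.ones((3, 3)))
local_maxi = np.zeros(distance.shape, dtype=bool)
local_maxi[tuple(coords.T)] = True

markers = ndi.label(local_maxi)[0]
labels = watershed(-distance, markers, watershed_line=True)
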
NeverNervous