
I am trying to measure the average thickness of a segmented and labeled image. Since I was not able to make this work with OpenCV, per a suggestion by @CrisLuengo I switched to using DIPlib. I found a good example of measuring the thickness of a part here: Measuring the distance between two lines using DipLib (PyDIP)

Issue: I can make this code work and get an estimate for one segment of my image, but the code returns an error for the second segment. Below is what I've done:

After importing the image and doing some pre-processing I ended up with this image:

pre-processed image

Here I am interested in 3 average thicknesses:

  1. the thickness of the long narrow white part on top
  2. the thickness of the black region in the middle
  3. the thickness of the thick white part at the bottom

In order to do this, I first labeled the image and selected the top 2 largest areas (the top and bottom white parts of the image):

label_image = measure.label(opening1, connectivity=opening1.ndim)
props = measure.regionprops_table(label_image, properties=['label', 'area', 'coords'])

slc = label_image
rps = regionprops(slc)
id = np.argsort(props['area'])[::-1]   # region indices, largest area first
new_slc = np.zeros_like(slc)
for i in id[0:2]:                      # keep the two largest regions
    new_slc[tuple(rps[i].coords.T)] = i + 1

which results in a labeled image with 2 labels:

[image: labeled result with 2 regions]

Since it looks like the approach introduced in Measuring the distance between two lines using DipLib (PyDIP) only works on a single region, I separated my sections. In other words, I will focus on each labeled part separately:

First, the thick white part:

slc = label_image
rps = regionprops(slc)
id = np.argsort(props['area'])[::-1]   # region indices, largest area first
new_slc = np.zeros_like(slc)
for i in id[0:1]:                      # keep only the largest region
    new_slc[tuple(rps[i].coords.T)] = i + 1

[image: largest region only]

Then I ran the code and it returned a thickness of 15.33 μm (compared to the actual thickness of 14.66 μm, this is a good estimate). It also returned this image:

[image: lines fitted to the region boundaries]

I am not sure how to interpret this image, but it looks like the algorithm is fitting lines to the boundaries. Now I want to do the same thing for the thin white part at the top. To do this I first select the top white part:

rps = regionprops(slc)
id = np.argsort(props['area'])[::-1]   # region indices, largest area first
new_slc = np.zeros_like(slc)
for i in id[1:2]:                      # keep the second-largest region
    new_slc[tuple(rps[i].coords.T)] = i + 1

[image: thin top region only]

Then I ran the algorithm explained here: Measuring the distance between two lines using DipLib (PyDIP)

However, it returns an error:

[image: error traceback]

Can someone advise why the algorithm is not working on the second portion of my image? What's the difference, and why am I getting an error?


UPDATE regarding pre-processing
====================

median = cv2.medianBlur(img, 13)
ret, th = cv2.threshold(median, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
kernel = np.ones((3, 15), np.uint8)
closing1 = cv2.morphologyEx(th, cv2.MORPH_CLOSE, kernel, iterations=2)
kernel = np.ones((1, 31), np.uint8)
closing2 = cv2.morphologyEx(closing1, cv2.MORPH_CLOSE, kernel)

label_image = measure.label(closing2, connectivity=closing2.ndim)
props = measure.regionprops_table(label_image, properties=['label'])

kernel = np.ones((1, 13), np.uint8)
opening1 = cv2.morphologyEx(closing2, cv2.MORPH_OPEN, kernel, iterations=2)

label_image = measure.label(opening1, connectivity=opening1.ndim)
props = measure.regionprops_table(label_image, properties=['label', 'area', 'coords'])

Original photo: below is a low-quality version of my original image, so you can better understand why I did all the pre-processing.

[image: original photo]

Ross_you
    Yeah, that algorithm assumes some minimal distance between the two sides, for the thin line it doesn’t find two separate edges, it detects both edges as one object because they’re very close together. I encourage you to run the algorithm step by step and look at the intermediate images, to understand how it works! In the meantime, I’ll think of a different approach that works on thin lines like these. – Cris Luengo Sep 22 '22 at 02:48
  • related question: https://stackoverflow.com/questions/73792621/how-to-measure-average-thickness-of-labeled-segmented-image – Christoph Rackwitz Oct 06 '22 at 08:25
  • new question: https://stackoverflow.com/questions/74047007/how-to-detect-black-contour-in-image-using-open-cv – Christoph Rackwitz Oct 12 '22 at 19:22

1 Answer


For the thin line, the program from the other Q&A doesn't work because the two edges along the line are too close together; the program doesn't manage to identify them as separate objects.

For a very thin line like this you can do something that is not possible with a thicker line: just measure its length and area; the width is the ratio of the two:

import diplib as dip
import numpy as np

# `label_image` is as in the OP
# `id` is the label ID for the thin line
msr = dip.MeasurementTool.Measure(label_image.astype(np.uint32), features=['Size', 'Feret'])
area = msr[id]['Size'][0]     # region area, in pixels
length = msr[id]['Feret'][0]  # maximum Feret diameter (longest extent)

width = area / length
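As a sanity check for the area-over-length estimate, a synthetic straight line recovers its thickness exactly. A NumPy-only sketch (the sizes are made up, and the maximum Feret diameter is approximated by the horizontal extent, which is essentially exact for a thin axis-aligned line):

```python
import numpy as np

# Synthetic thin horizontal line: 3 px thick, 50 px long (hypothetical sizes).
mask = np.zeros((20, 60), dtype=bool)
mask[8:11, 5:55] = True

area = mask.sum()                  # 3 * 50 = 150 px

# For a straight horizontal line, the length is just the column extent:
cols = np.where(mask.any(axis=0))[0]
length = cols[-1] - cols[0] + 1    # 50 px

width = area / length              # recovers the 3 px thickness
```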

Note that you should be able to get a more precise value if you don't binarize the image right away. The linked Q&A uses the grayscale input image to determine the location of the edges more precisely than is possible after binarizing.
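To illustrate what binarizing throws away: along a grayscale intensity profile you can interpolate where the intensity crosses the threshold and locate the edge to sub-pixel precision, whereas in the binary image the edge snaps to the nearest pixel. A minimal 1D sketch (the profile values are made up for illustration):

```python
import numpy as np

# A 1D intensity profile across an edge (hypothetical values).
profile = np.array([10.0, 12.0, 15.0, 80.0, 200.0, 210.0])
threshold = 100.0

# After binarizing, the edge is simply the first pixel above the threshold:
binary_edge = int(np.argmax(profile > threshold))

# From the grayscale values, interpolate linearly between the two samples
# that straddle the threshold to get a sub-pixel edge position:
i = binary_edge - 1
subpixel_edge = i + (threshold - profile[i]) / (profile[i + 1] - profile[i])
```

Here the binary edge lands on pixel 4, while the interpolated position is about 3.17, a difference of nearly a pixel per edge, which matters when the line itself is only a few pixels wide.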

Cris Luengo
  • Thanks @Cris. Unfortunately the code returns an error. As you suggested, I am using the label image (in my case `new_slc`, the thin part), then I use `img2 = dip.Image(new_slc)` to convert it to a DIPlib-compatible image, and then run your code (same as I did for the thicker part). The code returns this error: `data type not supported in function: Measure (D:\a\diplib\diplib\src\measurement\measurement_tool.cpp at line number 181)`. Do I need to replace `id` in your code with any value? I also ran the `dip.MeasurementTool.Measure` section of the code separately on `new_slc` and it returned the same error. – Ross_you Sep 26 '22 at 17:59
  • Also, can you please provide more details regarding binarizing the image? I would like to increase the accuracy of the thickness estimate for the thick part, but I am not sure what you mean by `binarizing` the image. – Ross_you Sep 26 '22 at 18:04
  • @Ross_you I've updated the code in the post. I just realized that `measure.label` returns a signed integer array, it needs to be unsigned for `MeasureTool`. `id` is the number that corresponds to the thin line, in your code `id` is an array, so use `id[1]` here. – Cris Luengo Sep 26 '22 at 21:06
  • "binarizing" means turning into a binary (two-level) image. Maybe you thresholded the original image, I don't know. By binarizing, you lose the exact location of the edge of objects, you round the location to the nearest pixel. With the original gray-scale information, you can often locate an edge with a much higher precision. That is what I do in the code of the other Q&A that you linked. I don't have enough information about your original image to help you with that though. – Cris Luengo Sep 26 '22 at 21:08
  • Thanks for explaining the details. I've used the code you suggested, `area = msr[1]['Size'][0]` and `length = msr[1]['Feret'][0]`, and the width comes out as 7.73. I am assuming this is in `pixel` units, correct? I multiplied this number by 0.000314 mm (the pixel size) and ended up with 2.43 μm. The measured value is about 3.02 μm, so I guess I can claim it's a good estimate. Please correct me if I am wrong in the conversion from pixels to actual dimensions. – Ross_you Sep 26 '22 at 22:15
  • Regarding binarizing: as I explained in the question (please see the update I added), I read my image using the `cv2.imread` command and then used median, threshold, closing, and opening filters to achieve a good-quality image. Are these methods affecting the accuracy of the measurements? For your information, I included my pre-processing steps in the question above (in the `UPDATE` section). – Ross_you Sep 26 '22 at 22:18
  • I also included a lower-quality version of the original image, so you can see why I used so many filters to clean and pre-process the image. – Ross_you Sep 26 '22 at 22:31