
I have an image and I've done some pre-processing on that image. Below is my preprocessing:

import cv2
import numpy as np

# read the image as grayscale
img = cv2.imread("...my_drive...\\image_69.tif", 0)

# denoise, then Otsu threshold to binary
median = cv2.medianBlur(img, 13)
ret, th = cv2.threshold(median, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# close gaps with two wide horizontal kernels
kernel = np.ones((3, 15), np.uint8)
closing1 = cv2.morphologyEx(th, cv2.MORPH_CLOSE, kernel, iterations=2)
kernel = np.ones((1, 31), np.uint8)
closing2 = cv2.morphologyEx(closing1, cv2.MORPH_CLOSE, kernel)

# open to remove small specks
kernel = np.ones((1, 13), np.uint8)
opening1 = cv2.morphologyEx(closing2, cv2.MORPH_OPEN, kernel, iterations=2)

So, basically, I used threshold filtering, closing, and opening, and the result looks like this:

[image: pre-processing result]

Note that type(opening1) returns numpy.ndarray, so the image at this step is a 1021 x 1024 NumPy array.

Then I labeled my image:

from skimage import measure

label_image = measure.label(opening1, connectivity=opening1.ndim)
# props is a dict of column arrays: {'label': ..., 'area': ..., 'coords': ...}
props = measure.regionprops_table(label_image, properties=['label', 'area', 'coords'])

and the result looks like this:

[image: labeled image]

Again, type(label_image) returns numpy.ndarray, so the image at this step is also a 1021 x 1024 NumPy array.

As you can see, the image currently has 6 labels. Some of them are short, small pieces, so I tried to keep the top 2 labels based on area:

from skimage.measure import regionprops

slc = label_image
rps = regionprops(slc)
areas = [r.area for r in rps]

# region indices sorted by area, largest first
idx = np.argsort(areas)[::-1]
new_slc = np.zeros_like(slc)

# keep only the two largest regions
for i in idx[:2]:
    new_slc[tuple(rps[i].coords.T)] = i + 1

Now the result looks like this:

[image: top two regions kept]

It looks like I was successful in keeping the top 2 regions (note that by changing idx[:2] you can select the thicker or the thinner white layer). Now:

What I want to do: find the average thickness of these two regions.

Also, please note that each of my pixels corresponds to 314 nm.

Can anyone advise how I can do this?

Original photo: below is a low-quality version of my original image, so you can better understand why I did all the pre-processing:

[image: original photo, low quality]

you can also access the original photo here : https://www.mediafire.com/file/20h66aq83edy1h7/img.7z/file

  • For your last question, change the color map to a grayscale map. For your first question, the easiest way would be to just measure the vertical thickness along each column. If you want the more proper thickness, get the direction of the thin line by fitting with the Hough line transform and computing the angle, then get the thickness at various places along the perpendicular to that angle. – fmw42 Sep 20 '22 at 21:19
  • @fmw42 thanks for the comment about the 2nd question. Regarding the first question, would you be able to provide more details? I am not really familiar with the "Hough line transform". Even providing more information about the simple method you mentioned (vertical thickness along each column) would be useful – Ross_you Sep 20 '22 at 21:25
  • If you want vertical distance, separate the two regions so you only deal with one at a time, unless you want the distance including both. Then simply get the count of non-zero values for each column; see NumPy's np.count_nonzero(). Then average the counts from each column to get the average for all the columns (a minimal sketch of this is shown after these comments). If you want the thickness along the perpendicular to the direction of your data, then see for example https://stackoverflow.com/questions/66338814/measuring-the-width-of-several-points-in-a-mask-image-based-on-another-mask-imag/66342550#66342550 – fmw42 Sep 20 '22 at 23:44
  • Does this answer your question? [Measuring the distance between two lines using DipLib (PyDIP)](https://stackoverflow.com/questions/70560162/measuring-the-distance-between-two-lines-using-diplib-pydip) – Cris Luengo Sep 21 '22 at 02:56
  • You could get an estimate of the thickness by dividing the area of each connected component by its width. – beaker Sep 21 '22 at 15:47
  • @CrisLuengo Thanks for your suggestion. Unfortunately I am not familiar with DipLib and I am trying to use CV2 to do the measurement. Based on your experience, do you suggest switching to `DipLib`? – Ross_you Sep 21 '22 at 18:04
  • @beaker Thanks. I can find the area; however, finding the width is still my challenge. I am not sure how to measure any dimensional thing (width, thickness, etc) – Ross_you Sep 21 '22 at 18:06
  • @CrisLuengo I tried to use your suggested approach, but in the first step, where we set the pixel size (`img.SetPixelSize(0.314, "mm")`), I got an error. It looks like the output of my code (new_slc) is not compatible with some of the functions you had. In this case the error I got is `'numpy.ndarray' object has no attribute 'SetPixelSize'` – Ross_you Sep 21 '22 at 18:26
  • if `img` is your NumPy array, then do `img = dip.Image(img)` before running the DIPlib code. This will convert it into a DIPlib image object that has the methods being used in that answer. -- Yes, I would suggest you switch to DIPlib. I am an author of DIPlib, and have been using it for >20 years, and think it's wonderful. Some things might be different from what you expect, but many things are actually easier to do than in other systems, especially if you're interested in measuring things. – Cris Luengo Sep 21 '22 at 19:35
  • @CrisLuengo thanks so much. I'll give `DIPlib` a try then and see what result I get. In case I have issues, I'll ask in a separate post related to DIPlib. Thanks again for your advice – Ross_you Sep 21 '22 at 20:29
  • @Ross_you Well, `regionprops_table` returns [major and minor axis length](https://scikit-image.org/docs/stable/api/skimage.measure.html#skimage.measure.regionprops)... – beaker Sep 21 '22 at 22:30
  • @CrisLuengo I actually tried `DIPlib` and got a good result for one section of my image; however, I got an error for the second section and I am not sure why. I asked another question specifically about the error here: https://stackoverflow.com/questions/73807919/how-to-measure-average-thickness-of-segmented-image-using-diplib – Ross_you Sep 22 '22 at 02:36
  • new question: https://stackoverflow.com/questions/74047007/how-to-detect-black-contour-in-image-using-open-cv – Christoph Rackwitz Oct 12 '22 at 19:22
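For reference, here is a minimal sketch of the per-column counting that fmw42 describes above. It assumes `label_image` from the question and uses hypothetical label ids 1 and 2 for the two kept regions (adjust to your own labels). Note that it measures vertical thickness, so for a tilted line it slightly overestimates the true perpendicular thickness:

import numpy as np

PIXEL_SIZE_NM = 314  # given in the question: each pixel is 314 nm

for lab_id in (1, 2):  # hypothetical ids of the two kept regions
    region = (label_image == lab_id)           # binary mask of one region
    counts = np.count_nonzero(region, axis=0)  # vertical run length per column
    counts = counts[counts > 0]                # skip columns the region misses
    mean_px = counts.mean()
    print(f"label {lab_id}: {mean_px:.2f} px = {mean_px * PIXEL_SIZE_NM:.0f} nm")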

3 Answers


Here is one way to do that in Python/OpenCV.

  • Read the input
  • Convert to gray
  • Threshold to binary
  • Get the contours and filter on area so that we have only the two primary lines
  • Sort by area
  • Select the first (smaller and thinner) contour
  • Draw it white filled on a black background
  • Get its skeleton
  • Get the points of the skeleton
  • Fit a line to the points and get the rotation angle of the skeleton
  • Loop over each of the two contours: draw each white filled on a black background, rotate it to horizontal, then get the vertical thickness of the line as the average of np.count_nonzero() along each column, and print the value
  • Save intermediate images

Input:

[image]

import cv2
import numpy as np
import skimage.morphology
import skimage.transform
import math

# read image
img = cv2.imread('lines.jpg')

# convert to grayscale
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)

# threshold
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)[1]

# get contours
new_contours = []
img2 = np.zeros_like(thresh, dtype=np.uint8)
contour_img = thresh.copy()
contour_img = cv2.merge([contour_img,contour_img,contour_img])
contours = cv2.findContours(thresh , cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
for cntr in contours:
    area = cv2.contourArea(cntr)
    if area > 1000:
        cv2.drawContours(contour_img, [cntr], 0, (0,0,255), 1)
        cv2.drawContours(img2, [cntr], 0, (255), -1)
        new_contours.append(cntr)

# sort contours by area
cnts_sort = sorted(new_contours, key=lambda x: cv2.contourArea(x), reverse=False)

# select first (smaller) sorted contour
first_contour = cnts_sort[0]
contour_first_img = np.zeros_like(thresh, dtype=np.uint8)
cv2.drawContours(contour_first_img, [first_contour], 0, (255), -1)

# thin smaller contour
thresh1 = (contour_first_img/255).astype(np.float64)
skeleton = skimage.morphology.skeletonize(thresh1)
skeleton = (255*skeleton).clip(0,255).astype(np.uint8)

# get skeleton points
pts = np.column_stack(np.where(skeleton.transpose()==255))

# fit line to pts
(vx,vy,x,y) = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01)
#print(vx,vy,x,y)
x_axis = np.array([1, 0])    # unit vector in the same direction as the x axis
line_direction = np.array([vx, vy])    # unit vector in the same direction as your line
dot_product = np.dot(x_axis, line_direction)
[angle_line] = (180/math.pi)*np.arccos(dot_product)
print("angle:", angle_line)

# loop over each sorted contour:
# draw the contour filled on a black background, rotate it to horizontal,
# then get the mean thickness from np.count_nonzero along each column
black = np.zeros_like(thresh, dtype=np.uint8)
i = 1
for cnt in cnts_sort:
    cnt_img = black.copy()
    cv2.drawContours(cnt_img, [cnt], 0, (255), -1)
    cnt_img_rot = skimage.transform.rotate(cnt_img, angle_line, resize=False)
    thickness = np.mean(np.count_nonzero(cnt_img_rot, axis=0))
    print("line ",i,"=",thickness)
    i = i + 1

# save resulting images
cv2.imwrite('lines_thresh.jpg',thresh)
cv2.imwrite('lines_filtered.jpg',img2)
cv2.imwrite('lines_small_contour_skeleton.jpg', skeleton)

# show thresh and result    
cv2.imshow("thresh", thresh)
cv2.imshow("contours", contour_img)
cv2.imshow("lines_filtered", img2)
cv2.imshow("first_contour", contour_first_img)
cv2.imshow("skeleton", skeleton)
cv2.waitKey(0)
cv2.destroyAllWindows()

Threshold image:

[image]

Contour image:

[image]

Filtered contour image:

[image]

Skeleton image:

[image]

Angle (in degrees) and Thicknesses (in pixels):

angle: 3.1869032185349733
line  1 = 8.79219512195122
line  2 = 49.51609756097561

To get the thickness in nm, multiply thickness in pixels by your 314 nm/pixel.
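As a quick sanity check of that conversion, using the two values printed above:

nm_per_px = 314  # from the question
print(8.79219512195122 * nm_per_px)   # line 1: ~2761 nm, about 2.8 µm
print(49.51609756097561 * nm_per_px)  # line 2: ~15548 nm, about 15.5 µm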

ADDITION

If I start with your tiff image, the following shows my preprocessing, which is similar to yours.

import cv2
import numpy as np
import skimage.morphology
import skimage.transform
import math

# read image
img = cv2.imread('lines.tif')

# convert to grayscale
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)

# threshold
thresh = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)[1]

# apply morphology
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1,5))
morph = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (29,1))
morph = cv2.morphologyEx(morph, cv2.MORPH_CLOSE, kernel)

# get contours
new_contours = []
img2 = np.zeros_like(gray, dtype=np.uint8)
contour_img = gray.copy()
contour_img = cv2.merge([contour_img,contour_img,contour_img])
contours = cv2.findContours(morph , cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
for cntr in contours:
    area = cv2.contourArea(cntr)
    if area > 1000:
        cv2.drawContours(contour_img, [cntr], 0, (0,0,255), 1)
        cv2.drawContours(img2, [cntr], 0, (255), -1)
        new_contours.append(cntr)

# sort contours by area
cnts_sort = sorted(new_contours, key=lambda x: cv2.contourArea(x), reverse=False)

# select first (smaller) sorted contour
first_contour = cnts_sort[0]
contour_first_img = np.zeros_like(morph, dtype=np.uint8)
cv2.drawContours(contour_first_img, [first_contour], 0, (255), -1)

# thin smaller contour
thresh1 = (contour_first_img/255).astype(np.float64)
skeleton = skimage.morphology.skeletonize(thresh1)
skeleton = (255*skeleton).clip(0,255).astype(np.uint8)

# get skeleton points
pts = np.column_stack(np.where(skeleton.transpose()==255))

# fit line to pts
(vx,vy,x,y) = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01)
#print(vx,vy,x,y)
x_axis = np.array([1, 0])    # unit vector in the same direction as the x axis
line_direction = np.array([vx, vy])    # unit vector in the same direction as your line
dot_product = np.dot(x_axis, line_direction)
[angle_line] = (180/math.pi)*np.arccos(dot_product)
print("angle:", angle_line)

# loop over each sorted contour:
# draw the contour filled on a black background, rotate it to horizontal,
# then get the mean thickness from np.count_nonzero along each column
black = np.zeros_like(thresh, dtype=np.uint8)
i = 1
for cnt in cnts_sort:
    cnt_img = black.copy()
    cv2.drawContours(cnt_img, [cnt], 0, (255), -1)
    cnt_img_rot = skimage.transform.rotate(cnt_img, angle_line, resize=False)
    thickness = np.mean(np.count_nonzero(cnt_img_rot, axis=0))
    print("line ",i,"=",thickness)
    i = i + 1

# save resulting images
cv2.imwrite('lines_thresh2.jpg',thresh)
cv2.imwrite('lines_morph2.jpg',morph)
cv2.imwrite('lines_filtered2.jpg',img2)
cv2.imwrite('lines_small_contour_skeleton2.jpg', skeleton)

# show thresh and result    
cv2.imshow("thresh", thresh)
cv2.imshow("morph", morph)
cv2.imshow("contours", contour_img)
cv2.imshow("lines_filtered", img2)
cv2.imshow("first_contour", contour_first_img)
cv2.imshow("skeleton", skeleton)
cv2.waitKey(0)
cv2.destroyAllWindows()

Threshold image:

[image]

Morphology image:

[image]

Filtered Lines image:

[image]

Skeleton image:

[image]

Angle (degrees) and Thickness (pixels):

angle: 3.206927978669998
line  1 = 9.26171875
line  2 = 49.693359375
  • Use Deskew to straighten up the image.

[image: deskewed image]

  • Then count the pixels in each column that have the color of the label you want to measure, and divide by the number of columns to get the average thickness (a sketch of this is shown below).
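A minimal sketch of that idea, assuming a hypothetical file label_mask.png that holds a binary (0/255) mask of one band. Note that cv2.minAreaRect's angle convention differs across OpenCV versions, so verify the sign of the rotation on your own data:

import cv2
import numpy as np

mask = cv2.imread('label_mask.png', cv2.IMREAD_GRAYSCALE)  # hypothetical input

# deskew: estimate the band's tilt from its minimum-area bounding rectangle
pts = cv2.findNonZero(mask)
(cx, cy), (w, h), angle = cv2.minAreaRect(pts)
if w < h:
    angle -= 90  # normalize minAreaRect's ambiguous angle
M = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
straight = cv2.warpAffine(mask, M, (mask.shape[1], mask.shape[0]))

# average thickness = non-zero pixels per column, averaged over occupied columns
counts = np.count_nonzero(straight, axis=0)
avg_px = counts[counts > 0].mean()
print(avg_px, 'px =', avg_px * 314, 'nm')  # 314 nm per pixel, from the question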

This can be done with various tools in scipy. Assume you have the image here:

import numpy as np
import PIL.Image
from matplotlib.pyplot import imshow, plot, hlines, legend

I = PIL.Image.open("input.jpg")
img = np.array(I).mean(axis=2)  # average the RGB channels to get grayscale
mask = img == 255               # or some kind of thresholding
imshow(mask)  # note: this is a binary image; the green coloring is a rendering/aliasing artifact

[image: binary mask]

If one zooms in, they can see split-up regions: [image: zoomed view showing broken regions]

To get around that, we can dilate the mask:

from scipy import ndimage as ni
mask1 = ni.binary_dilation(mask, iterations=2)
imshow(mask1)

[image: dilated mask]

Now, we can find connected regions, and find the top regions with the most pixels, which should be the two lines of interest:

lab, nlab = ni.label(mask1)
# label ids sorted by pixel count, largest first (label ids start at 1)
max_labs = np.argsort([(lab == i).sum() for i in range(1, nlab + 1)])[::-1] + 1
imshow(lab == max_labs[0])

[image: largest region]

and imshow(lab==max_labs[1]): [image: second-largest region]

Working with the first line as an example:

from scipy.stats import linregress

y0, x0 = np.where(lab == max_labs[0])
l0 = linregress(x0, y0)
# evaluate the fitted line across the full image width (shape[1] columns)
xi = np.arange(img.shape[1])
yi = xi * l0.slope + l0.intercept
plot(xi, yi, 'r--')

[image: fitted line]

Interpolate along this region at different y-intercepts and compute the average signal along each line:

from scipy.interpolate import RectBivariateSpline
img0 = img.copy()
img0[~(lab==max_labs[0])] = 0  # set everything outside this line region to 0
rbv = RectBivariateSpline(np.arange(img.shape[0]), np.arange(img.shape[1]), img0)
prof0 = [rbv.ev(yi+i, xi).mean() for i in np.arange(-300,300)]  # pick a wide window here (-300,300), can be more technical, but not necessary
plot(prof0)

[image: signal profile]

Use your favorite method to compute the FWHM of this profile, then multiply by your pixel-to-nanometers factor.

I would just use a Gaussian fit to compute the FWHM:

from scipy.optimize import minimize

xvals = np.arange(len(prof0))
yvals = np.array(prof0)

def func(p, xvals, yvals):
    # sum of squared residuals for a Gaussian model
    mu, var, amp = p
    model = np.exp(-(xvals - mu)**2 / 2 / var) * amp
    resid = (model - yvals)**2
    return resid.sum()

x0 = 300, 200, 255  # initial estimate of mu, variance, amplitude
fit_gauss = minimize(func, x0=x0, args=(xvals, yvals), method='Nelder-Mead')

mu, var, amp = fit_gauss.x
fwhm = 2.355 * np.sqrt(var)  # FWHM of a Gaussian = 2*sqrt(2*ln 2)*sigma ≈ 2.355*sigma

# display using matplotlib plot/hlines
plot(xvals, yvals)
plot(xvals, amp * np.exp(-(xvals - mu)**2 / 2 / var))
hlines(amp * 0.5, mu - fwhm/2., mu + fwhm/2., color='r')
legend(("profile", "fit gauss", "fwhm=%.2f pix" % fwhm))

[image: Gaussian fit with FWHM]

Finally, thickness=fwhm*314, or about 13 microns.

Following the exact same approach for the second line (lab==max_labs[1]) gives a thickness of about 2.2 microns: [image: second-line fit]

Note, I was using interactive plotting for this example, hence the calls to imshow, plot, etc. are meant mostly as a reference for the reader. One may need extra steps to recreate the exact images I've uploaded (zooming, etc.).

  • Thanks so much @dermen for all the effort you put into this. I wish I could accept 2 correct answers. Your approach is really interesting; it gives a less accurate result compared to the first method. I believe in the above histograms you are taking the average of the signal. Maybe changing it to a weighted average, or calibrating it on one picture, can help me get a better result. I will review it on all my picture samples and will ask you here if I have more questions. Thanks – Ross_you Oct 03 '22 at 21:29
  • it was a fun problem to work out, gave me a distraction during zoom meetings – dermen Oct 03 '22 at 21:35
  • You should definitely threshold at 128, not at 255. The way you do it lets JPEG artifacts ruin the image. The dilation you apply makes the lines thicker, meaning your width measurement will be overestimated. Also, because the line is not horizontal, the vertical cross-sections do not represent the width of the line, they make an angle with the normal. This also causes an overestimation. – Cris Luengo Oct 04 '22 at 23:37
  • Actually, I'm not doing vertical cross sections; rather, I'm interpolating along lines parallel to the region of interest. Changing the intercept simply moves the diagonal line (red dashed line) up/down; that's how I form the profile. Interestingly, my approach gives an underestimate, considering the results shown in the accepted answer. Your point about 128 vs 255 is interesting, and it would increase the thicknesses I derive. – dermen Oct 05 '22 at 00:36