
I am using the skimage library for most of my image analysis work.

I have an RGB image and I intend to extract texture features like entropy, energy, homogeneity and contrast from it.

Below are the steps that I am performing:

import numpy as np
from skimage import io, color, feature
from skimage.filters import rank
from skimage.morphology import disk

rgbImg = io.imread(imgFlNm)
grayImg = color.rgb2gray(rgbImg)
print(grayImg.shape)  # (667, 1000), a 2-dimensional grayscale image

glcm = feature.greycomatrix(grayImg, [1], [0, np.pi/4, np.pi/2, 3*np.pi/4])
print(glcm.shape)  # (256, 256, 1, 4)

rank.entropy(glcm, disk(5))  # throws an error since entropy expects a 2-D array as its argument

rank.entropy(grayImg, disk(5))  # gives an output

My question is: is the entropy calculated directly from the gray-scale image the same as the entropy feature extracted from the GLCM (a texture feature)?

If not, what is the right way to extract all the texture features from an image?

Notes: I have already referred to:

Entropy - skimage

GLCM - Texture features

Tonechas
Sreejith Menon

2 Answers


Is the entropy calculated directly from the gray-scale image the same as the entropy feature extracted from the GLCM (a texture feature)?

No, these two entropies are rather different:

  1. skimage.filters.rank.entropy(grayImg, disk(5)) yields an array of the same size as grayImg containing the local entropy across the image, computed on a circular disk centered at the corresponding pixel with a radius of 5 pixels. Take a look at Entropy (information theory) to find out how entropy is calculated. The values in this array are useful for segmentation (follow this link to see an example of entropy-based object detection). If your goal is to describe the entropy of the image through a single (scalar) value you can use skimage.measure.shannon_entropy(grayImg). This function basically applies the following formula to the full image:
    entropy = -sum(p(i) * log_b(p(i))) for i = 0, ..., n-1
    where n is the number of gray levels (256 for 8-bit images), p(i) is the probability of a pixel having gray level i, and b is the base of the logarithm function. When b is set to 2 the returned value is measured in bits.
  2. A gray level co-occurrence matrix (GLCM) is a histogram of co-occurring grayscale values at a given offset over an image. To describe the texture of an image it is usual to extract features such as entropy, energy, contrast, correlation, etc. from several co-occurrence matrices computed for different offsets. In this case the entropy is defined as follows:
    entropy = -sum(p(i, j) * log_b(p(i, j))) for i, j = 0, ..., n-1
    where n and b are again the number of gray levels and the base of the logarithm function, respectively, and p(i, j) is the (i, j) entry of the normalized GLCM, i.e. the probability of two pixels separated by the specified offset having intensities i and j. Unfortunately, entropy is not one of the GLCM properties that you can calculate through scikit-image*. If you wish to compute this feature you need to pass the GLCM to skimage.measure.shannon_entropy.

*At the time this post was last edited, the latest version of scikit-image was 0.13.1.
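To make the two quantities concrete, here is a minimal sketch that computes the scalar image entropy and then applies the GLCM entropy formula above by hand. It uses scikit-image's bundled camera image as a stand-in for your own image, and falls back between the old greycomatrix and newer graycomatrix spellings:

```python
import numpy as np
from skimage import data
from skimage.measure import shannon_entropy
try:
    from skimage.feature import graycomatrix  # spelling in skimage >= 0.19
except ImportError:
    from skimage.feature import greycomatrix as graycomatrix

grayImg = data.camera()  # 8-bit grayscale sample image

# Scalar entropy of the image itself: -sum(p(i) * log2(p(i)))
imgEntropy = shannon_entropy(grayImg)

# Entropy of one normalized GLCM: -sum(p(i, j) * log2(p(i, j)))
glcm = graycomatrix(grayImg, [1], [0], symmetric=True, normed=True)
p = glcm[:, :, 0, 0]   # the distribution for offset (distance 1, angle 0)
nz = p[p > 0]          # skip zero entries, since 0 * log(0) is taken as 0
glcmEntropy = -np.sum(nz * np.log2(nz))

print(imgEntropy, glcmEntropy)
```

The two printed values differ, which illustrates that the image entropy and the GLCM entropy are distinct quantities.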

If not, what is the right way to extract all the texture features from an image?

There are a wide variety of features to describe the texture of an image, for example local binary patterns, Gabor filters, wavelets, Laws' masks and many others. Haralick's GLCM is one of the most popular texture descriptors. One possible approach to describe the texture of an image through GLCM features consists in computing the GLCM for different offsets (each offset is defined through a distance and an angle), and extracting different properties from each GLCM.

Let us consider, for example, three distances (1, 2 and 3 pixels), four angles (0, 45, 90 and 135 degrees) and two properties (energy and homogeneity). This results in 12 offsets (and hence 12 GLCMs) and a feature vector of dimension 24. Here's the code:

import numpy as np
from skimage import io, color, img_as_ubyte
from skimage.feature import greycomatrix, greycoprops
from sklearn.metrics.cluster import entropy

rgbImg = io.imread('https://i.stack.imgur.com/1xDvJ.jpg')
grayImg = img_as_ubyte(color.rgb2gray(rgbImg))  # greycomatrix requires an integer image

distances = [1, 2, 3]
angles = [0, np.pi/4, np.pi/2, 3*np.pi/4]
properties = ['energy', 'homogeneity']

glcm = greycomatrix(grayImg,
                    distances=distances,
                    angles=angles,
                    symmetric=True,
                    normed=True)

# One value per property and per (distance, angle) pair: 2 properties x 12 offsets = 24 features
feats = np.hstack([greycoprops(glcm, prop).ravel() for prop in properties])

Results obtained using this image:

sample image - lion:

In [56]: entropy(grayImg)
Out[56]: 5.3864158185167534

In [57]: np.set_printoptions(precision=4)

In [58]: print(feats)
[ 0.026   0.0207  0.0237  0.0206  0.0201  0.0207  0.018   0.0206  0.0173
  0.016   0.0157  0.016   0.3185  0.2433  0.2977  0.2389  0.2219  0.2433
  0.1926  0.2389  0.1751  0.1598  0.1491  0.1565]
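Since entropy is not among the properties that greycoprops computes, the per-offset GLCM entropies can be appended to the feature vector by hand with the formula given earlier. A sketch under the same distances and angles, using scikit-image's bundled camera image instead of the lion:

```python
import numpy as np
from skimage import data
try:
    from skimage.feature import graycomatrix  # spelling in skimage >= 0.19
except ImportError:
    from skimage.feature import greycomatrix as graycomatrix

grayImg = data.camera()  # 8-bit grayscale sample image

distances = [1, 2, 3]
angles = [0, np.pi/4, np.pi/2, 3*np.pi/4]

glcm = graycomatrix(grayImg, distances=distances, angles=angles,
                    symmetric=True, normed=True)

# -sum(p * log2(p)) for each of the 12 (distance, angle) slices
entropies = []
for d in range(len(distances)):
    for a in range(len(angles)):
        p = glcm[:, :, d, a]
        nz = p[p > 0]  # 0 * log(0) is taken as 0
        entropies.append(-np.sum(nz * np.log2(nz)))

entropyFeats = np.array(entropies)  # 12 extra texture features
print(entropyFeats.shape)  # (12,)
```

These 12 values could then be stacked alongside the 24 energy and homogeneity features with np.hstack.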
Tonechas
  • I wondered whether it is correct to pass `glcm` to `skimage.measure.shannon_entropy()` when `normed=True` is set. Why don't we use `normed=False`? – Gilfoyle Jul 21 '18 at 21:39
  • For the entropy formula to hold, p(·) has to be a probability, i.e. the sum of the GLCM entries has to be 1. This is why the parameter `normed` must be set to `True`. – Tonechas Jul 21 '18 at 21:47
  • Great post, but there is one thing I don't understand: you say that one usually runs the analysis for several offsets (and then what do you do with them? How do you combine them?), and that you can compute the entropy of those GLCMs using `skimage.measure.shannon_entropy`. But at the same time you say that you can just compute Shannon's entropy of the original image. What is the difference between those two approaches, and which one should be used? Why generate all the offset GLCMs if you can just use the original image? Thanks – My Work May 29 '22 at 10:08
  • Using several offsets is the way to go if you intend to perform a multi-scale analysis. The features extracted from the GLCMs computed for different offsets are the components of the feature vector that models an image. – Tonechas May 30 '22 at 21:10
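The point raised in the comments about `normed` can be checked directly: with normed=True each GLCM slice sums to 1 (a probability distribution), while with normed=False it holds raw co-occurrence counts. A small sketch using a bundled sample image:

```python
import numpy as np
from skimage import data
try:
    from skimage.feature import graycomatrix  # spelling in skimage >= 0.19
except ImportError:
    from skimage.feature import greycomatrix as graycomatrix

grayImg = data.camera()  # 512 x 512, 8-bit grayscale

# normed=True: entries form a probability distribution summing to 1
glcmNormed = graycomatrix(grayImg, [1], [0], normed=True)
print(glcmNormed[:, :, 0, 0].sum())  # ~1.0

# normed=False: entries are raw counts of co-occurring pixel pairs
glcmCounts = graycomatrix(grayImg, [1], [0], normed=False)
print(glcmCounts[:, :, 0, 0].sum())  # total number of horizontal pixel pairs
```

Only the normed version satisfies the requirement that p(i, j) be a probability, which is what the entropy formula needs.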
import matplotlib.pyplot as plt
from skimage.feature import greycomatrix, greycoprops

# glcm is assumed to have been computed beforehand with greycomatrix
dis = greycoprops(glcm, 'dissimilarity')
plt.hist(dis.ravel(), density=True, bins=256, range=(0, 30), facecolor='0.5')
plt.show()
Yehadska