I have learnt that entropy measures the randomness of the pixels. But please help me understand how this randomness is calculated mathematically, and why different images have different entropy.
It's not really the randomness, rather how unexpected each value is given the other values. Thus, it is related to the amount of information present in the image. – Cris Luengo May 13 '18 at 07:44
2 Answers
You may as well calculate the Shannon entropy straight from your `img`. Just do:
import skimage.measure
entropy = skimage.measure.shannon_entropy(img)
If you want to see the maths behind:
import numpy as np
marg = np.histogramdd(np.ravel(img), bins=256)[0] / img.size
marg = np.ravel(marg)
marg = marg[marg > 0]  # drop zero-probability bins
entropy = -np.sum(np.multiply(marg, np.log2(marg)))
First, `marg` is the marginal distribution of the two-dimensional grayscale image `img`. `bins` is set to 256 for an 8-bit image. Then you need to filter out the probabilities that are equal to zero and finally sum over the remaining elements of `np.multiply(marg, np.log2(marg))`, as defined by Shannon's entropy.
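To sanity-check the computation above, here is a minimal sketch using a hypothetical 4x4 test image with two gray levels in equal proportion; the entropy of a 50/50 two-level image should be exactly 1 bit:

```python
import numpy as np

# Hypothetical 4x4 test image: 8 pixels of value 0 and 8 of value 255
img = np.array([[0, 255]] * 8, dtype=np.uint8).reshape(4, 4)

# Same computation as in the answer above
marg = np.histogramdd(np.ravel(img), bins=256)[0] / img.size
marg = np.ravel(marg)
marg = marg[marg > 0]  # drop zero-probability bins
entropy = -np.sum(marg * np.log2(marg))
print(entropy)  # 1.0 bit, since p = [0.5, 0.5]
```

The same value should come out of `skimage.measure.shannon_entropy(img)` for this image, since both approaches reduce to the histogram of pixel values.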

Thx for showing the math behind shannon_entropy() function. How many bins does skimage.measure.shannon_entropy() use if the pixels of the grayscale image are of type 32 bit float? – shparekh Apr 26 '23 at 16:25
The entropy of an image is defined as follows:

H = -∑_{i=1}^{n} pᵢ log_b(pᵢ)

where n is the number of gray levels (256 for 8-bit images), pᵢ is the probability of a pixel having gray level i, and b is the base of the logarithm function.
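As a small worked example of this definition (with hypothetical probabilities, not taken from any particular image): suppose an image has three gray levels occurring with probabilities 1/2, 1/4 and 1/4.

```python
import numpy as np

# Hypothetical gray-level probabilities p_i for a three-level image
p = np.array([0.5, 0.25, 0.25])

# H = -sum_i p_i * log_b(p_i), with b = 2 so the result is in bits
H = -np.sum(p * np.log2(p))
print(H)  # 0.5*1 + 0.25*2 + 0.25*2 = 1.5 bits
```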
Notice that the entropy of an image is rather different from the entropy feature extracted from the GLCM (Gray-Level Co-occurrence Matrix) of an image. Take a look at this post to learn more.
As per your request, I'm attaching an example of how the entropy of a GLCM is computed:
First we import the necessary modules:
import numpy as np
from skimage import io
from skimage.feature import greycomatrix
Then we read the image:
img = io.imread('https://i.stack.imgur.com/07DZW.png')
The GLCM (corresponding to the pixel to the right) of the image above is computed as follows:
glcm = np.squeeze(greycomatrix(img, distances=[1],
                               angles=[0], symmetric=True,
                               normed=True))
And finally we apply this formula to calculate the entropy:

H = -∑ᵢ ∑ⱼ p(i, j) log_b(p(i, j))

where p(i, j) represents the entries of the GLCM. If we set b to 2, the result is expressed in bits.
entropy = -np.sum(glcm*np.log2(glcm + (glcm==0)))
# yields 10.704625483788325
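For a self-contained run that does not depend on downloading the linked image, here is a sketch on a hypothetical 4x4 checkerboard, where the right-hand neighbour of every pixel always has the other value; with the "one pixel to the right" offset the symmetric normalized GLCM puts probability 0.5 on each of the two off-diagonal entries, so the entropy is exactly 1 bit (note that newer versions of scikit-image spell the function `graycomatrix`):

```python
import numpy as np

try:
    from skimage.feature import graycomatrix  # scikit-image >= 0.19
except ImportError:
    from skimage.feature import greycomatrix as graycomatrix  # older spelling

# Hypothetical two-level checkerboard test image
img = np.tile([[0, 1], [1, 0]], (2, 2)).astype(np.uint8)

glcm = np.squeeze(graycomatrix(img, distances=[1], angles=[0],
                               levels=2, symmetric=True, normed=True))

# The (glcm == 0) term avoids log2(0); those entries contribute 0 anyway
entropy = -np.sum(glcm * np.log2(glcm + (glcm == 0)))
print(entropy)  # 1.0 bit: p(0,1) = p(1,0) = 0.5, p(0,0) = p(1,1) = 0
```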

What does "corresponding to the pixel to the right" mean? Off of the image? – mLstudent33 Nov 16 '19 at 00:20
The GLCM models the distribution of co-occurring pixel values at a given offset. In the example above the _offset_ used to compute the GLCM is "one pixel to the right". – Tonechas Nov 16 '19 at 00:33
@Tonechas If I want to calculate the entropy of a color picture, what should I do? This code works only for grayscale. – Zewo Jun 19 '20 at 17:18
Replacing pᵢ, the probability of gray level `i`, by the joint probability of color `(r, g, b)`, pʳᵍᵇ, would do the trick. You need to introduce this change into the first formula. – Tonechas Jun 19 '20 at 21:24
Note your edit requests. The equations look terrible to the point of unreadability. – havakok Jan 09 '23 at 15:13
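A minimal sketch of the color-image approach suggested in the comments above, treating each `(r, g, b)` triple as a single symbol and computing the entropy of the joint color distribution (the image here is a hypothetical random one, just for illustration):

```python
import numpy as np

# Hypothetical 32x32 RGB test image with reproducible random pixels
rng = np.random.default_rng(0)
img_rgb = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)

# Count how often each distinct (r, g, b) color occurs
colors, counts = np.unique(img_rgb.reshape(-1, 3), axis=0, return_counts=True)
p = counts / counts.sum()

# Joint-color Shannon entropy, in bits per pixel (at most 24 for 8-bit RGB)
entropy = -np.sum(p * np.log2(p))
print(entropy)
```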