This answer got a little too long for a comment:
It is not quite clear to me what you are trying to do. Do you want to count all distinguishable elements in an image, or do you want to count all shades of e.g. red as one color?
Also, images can be represented using different kinds of datatypes (e.g. `float` ∈ [-1, 1] or `uint8` ∈ [0, 255]). In your first line you apparently expect an image of type `float`, judging by `range=(0,1)`.
With binning you might lose the information about distinguishable elements and therefore no longer be able to count them.
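To illustrate the point about binning, here is a tiny sketch (the pixel values and the bin count of 10 are made up for the demonstration):

```python
import numpy as np

# Three distinguishable gray values, two of them close together
pixels = np.array([0.50, 0.52, 0.90])

# With 10 bins of width 0.1, both 0.50 and 0.52 land in the bin [0.5, 0.6),
# so the histogram only shows 2 occupied bins for 3 distinct values.
counts, edges = np.histogram(pixels, bins=10, range=(0, 1))
```

So a histogram-based count underestimates the number of distinguishable elements as soon as two values share a bin.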
Counting distinguishable colors
To count all available colors (= distinguishable elements) in a grayscale image you can use the following one-liner. (This will of course also work for `float` images. If you really want to distinguish every color, this is perfect; if not, your `np.histogram` approach is a good idea.)
len(set(gray_image.flatten()))
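A quick sanity check of that one-liner on a tiny synthetic image (the array here is just an illustration):

```python
import numpy as np

gray_image = np.array([[0.0, 0.5],
                       [0.5, 1.0]])

# Three distinct values: 0.0, 0.5, 1.0 (0.5 is only counted once)
n_colors = len(set(gray_image.flatten()))
print(n_colors)  # → 3

# An equivalent, usually faster alternative:
assert len(np.unique(gray_image)) == n_colors
```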
As for the error you got with `scipy.signal.find_peaks_cwt(histogram, 10)`: `find_peaks_cwt()` expects a 1-D array of widths as its second argument. If you provide a numpy array (e.g. `np.array([10])`) instead of the scalar `10`, it will work just fine.
Counting clusters of similar colors
In case you want to cluster similar colors and not count them twice, there are different approaches you can choose from. The keyword here is "color quantization". As shown in this post, you can use clustering algorithms to quantize the colors used in the image.
After color quantization you can simply reshape the image to preserve the RGB tuples and use NumPy's `unique` method like this:
import numpy as np
len(np.unique(color_image.reshape(-1, 3), axis=0))
There are a lot of methods to reduce the number of colors. Pillow has two functions for that, `posterize` and `quantize`. A k-means approach using scikit-learn was shown in the post I mentioned earlier, and you could also try using a distance metric against a predefined set of colors.
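A minimal sketch of the k-means route with scikit-learn (the random image and `n_clusters=8` are just assumptions for the demo; in practice you would load your own image):

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical RGB image; replace with your own loaded image
rng = np.random.default_rng(0)
color_image = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)

# Cluster all pixels into 8 representative colors
pixels = color_image.reshape(-1, 3).astype(float)
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(pixels)

# Replace every pixel with its cluster center -> quantized image
quantized = kmeans.cluster_centers_[kmeans.labels_].reshape(color_image.shape)

# Counting unique RGB tuples afterwards yields at most 8 colors
n = len(np.unique(quantized.reshape(-1, 3), axis=0))
```

After this step the `np.unique` counting from above gives you the number of color clusters rather than the number of raw pixel values.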