
Inspired by this question and this answer (which isn't very solid), I realized that I often find myself converting a color image that is almost grayscale (usually a color scan of a grayscale original) to grayscale. So I wrote a function meant to measure a kind of distance of a color image from grayscale:

import numpy as np
from PIL import Image, ImageChops, ImageOps, ImageStat

def distance_from_grey(img):
    # img must be a Pillow Image object in RGB mode
    img_diff = ImageChops.difference(img, ImageOps.grayscale(img).convert('RGB'))
    return np.array(img_diff.getdata()).mean()

img = Image.open('test.jpg')
print(distance_from_grey(img))

The number obtained is the average, over all pixels and channels, of the absolute difference between the RGB values and their grayscale equivalents; it is zero for a perfect grayscale image.

What I'm asking imaging experts is:

  • is this approach valid, or are there better ones?
  • below which distance can an image be safely converted to grayscale without checking it visually?
mmj
    I am not an expert. Intuitively, I would say you need to square the differences before adding them up, and then taking the square root again: Error = 1/N * sqrt(Sum error_i^2). In that case, if some pixels deviate a lot and others don't at all, this is considered worse than if every pixel deviates a little bit. – physicalattraction Dec 22 '22 at 12:37
    You could use a perceptually uniform colourspace, e.g. JzAzBz, ICtCp, OkLab, convert to Lightness, Chroma, Hue (LCH) representation and check whether the Chroma is close to zero. – Kel Solaar Dec 23 '22 at 02:03
  • @KelSolaar Very interesting, I'm studying your comment; I'm sure many would be grateful if you showed how to do it in an answer. – mmj Dec 23 '22 at 09:59
  • Not sure exactly what cases you need to discriminate between, but you could consider the saturation in HSV colourspace as an indication of greyness https://stackoverflow.com/a/74874586/2836621 – Mark Setchell Dec 23 '22 at 11:33
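A minimal sketch of the HSV-saturation idea from the last comment, assuming a Pillow RGB image as in the question (the filename is a placeholder and no particular threshold is implied):

import numpy as np
from PIL import Image

def mean_saturation(img):
    # Convert to HSV and average the saturation channel (0-255 in Pillow);
    # values near 0 suggest an (almost) grayscale image.
    hsv = np.array(img.convert('HSV'), dtype=float)
    return hsv[..., 1].mean() / 255.0

img = Image.open('test.jpg')
print(mean_saturation(img))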

2 Answers


Given the following 3 images and using Colour:

[Images: McDonald Lake, Niagara Falls, Colourful Pencils]

import numpy as np
import colour

image_1 = colour.read_image("mcdonald_lake.png")
# "mcdonald_lake.png" is single channel, we convert it to 3
image_1 = colour.utilities.tstack([image_1, image_1, image_1])
image_2 = colour.read_image("niagara_falls.png")
image_3 = colour.read_image("colouring_pencils.png")

# Converting from assumed "sRGB" encoded, i.e. "Output-Referred" to "Oklab" using Colour's Automatic Colour Conversion Graph.
image_1_OkLab = colour.convert(image_1, "Output-Referred RGB", "Oklab")
image_2_OkLab = colour.convert(image_2, "Output-Referred RGB", "Oklab")
image_3_OkLab = colour.convert(image_3, "Output-Referred RGB", "Oklab")

# Converting from "Lightness" and "a", "b" opponent colour dimensions
# to "Lightness", "Chroma" and "Hue".
image_1_OkLab_JCh = colour.models.Jab_to_JCh(image_1_OkLab)
image_2_OkLab_JCh = colour.models.Jab_to_JCh(image_2_OkLab)
image_3_OkLab_JCh = colour.models.Jab_to_JCh(image_3_OkLab)

print(np.mean(image_1_OkLab_JCh[..., 1]))
print(np.mean(image_2_OkLab_JCh[..., 1]))
print(np.mean(image_3_OkLab_JCh[..., 1]))
6.14471772026e-05
0.0292843706963
0.0798391223111

If you want to use ICtCp for example, you can simply change "Oklab" for "ICtCp" above.
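For instance, reusing the variables from the snippet above (applying the same Jab_to_JCh chroma extraction to ICtCp is my extrapolation, with Ct and Cp playing the role of the opponent dimensions):

# Same pipeline as above, but using ICtCp as the perceptually uniform space.
image_1_ICtCp = colour.convert(image_1, "Output-Referred RGB", "ICtCp")
image_1_ICtCp_JCh = colour.models.Jab_to_JCh(image_1_ICtCp)
print(np.mean(image_1_ICtCp_JCh[..., 1]))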

It is also possible to get a detailed overview of the computations run by the graph by using the verbose={"mode": "Long"} argument:

colour.convert(image_1, "Output-Referred RGB", "Oklab", verbose={"mode": "Long"})

Google Colab Notebook: https://colab.research.google.com/drive/1aDyUa4hSeCn-Sj47nUOilRAghl0fpd_W?usp=sharing

Kel Solaar
  • Thanks, that's the quality answer I was looking for, it works great, the output values differentiate *almost grey* from color images by a factor of about 4, whereas with my method the factor is about 2.5, so that's a nice improvement and I'm sure that the results are more solid. – mmj Dec 23 '22 at 22:57
  • Did something change in the last few days in the `colour` package installation with `conda install -c conda-forge colour-science` ? Arranging another environment I got an error saying the `networkx` module was missing and I had to install it manually. – mmj Dec 29 '22 at 19:40
    Nothing from our end! – Kel Solaar Dec 30 '22 at 03:46

This answer assumes your grayscaling function is idempotent; if that does not hold, ignore this answer entirely.
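A minimal sketch of such an idempotency check, assuming the Pillow-based conversion from the question:

from PIL import Image, ImageChops, ImageOps

def grayscale_is_idempotent(img):
    # Grayscaling an already grayscaled image should change nothing;
    # getbbox() returns None when the difference image is all zeros.
    once = ImageOps.grayscale(img).convert('RGB')
    twice = ImageOps.grayscale(once).convert('RGB')
    return ImageChops.difference(once, twice).getbbox() is None

img = Image.open('test.jpg')
print(grayscale_is_idempotent(img))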

is this approach valid

That depends on what your inputs look like and what you expect to do. Consider an edge case: an image that consists of two parts, where the left half is grayscale and the right half is colorful. What should happen?

better ones?

That depends on your definition of better. I suggest experimenting with other functions in place of mean, e.g. max or median, as in the sketch below.
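A minimal sketch of that idea, built on the question's function with the aggregation statistic as a parameter (only the statistic changes):

import numpy as np
from PIL import Image, ImageChops, ImageOps

def distance_from_grey(img, statistic=np.mean):
    # Same per-pixel difference as in the question, but the aggregation
    # statistic (mean, max, median, ...) is a parameter.
    img_diff = ImageChops.difference(img, ImageOps.grayscale(img).convert('RGB'))
    return statistic(np.array(img_diff.getdata()))

img = Image.open('test.jpg')
print(distance_from_grey(img, np.mean))
print(distance_from_grey(img, np.max))
print(distance_from_grey(img, np.median))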

Daweo
  • My main purpose is detecting automatically when an image is a color scan from a grayscale original (like black and white photo or drawing), so I can convert it to grayscale without checking it visually. – mmj Dec 22 '22 at 13:43