
I'm trying to do something like this. I need to extract the relative light intensity at each point from the image, and I would like to know how to do it.

The first thing that comes to mind is to convert the image to black-and-white. I've found three different algorithms here. I used my own image as a test case for all three algorithms and for the built-in conversion in the Python Imaging Library, image.convert('1'). The first two algorithms give strange results for the darkened parts (my hair, eyebrows, etc.); the third algorithm, 'luminosity', gives a result very similar to what I get with some image-processing software, while the Python built-in conversion just gives something ridiculous. I'm not sure which one is the best representation of light intensity, and I'm also not sure whether the camera already makes its own adjustments per image, since the images all have different light orientations.
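For reference, this is roughly what I tried with the built-in conversion (a sketch only; my real input is a photo loaded from disk, replaced here by a small synthetic grey ramp so it runs stand-alone):

```python
from PIL import Image

# Synthetic grey ramp standing in for my actual photo.
img = Image.new('RGB', (4, 1))
img.putdata([(i * 60, i * 60, i * 60) for i in range(4)])

bw = img.convert('1')      # the built-in 1-bit conversion I tried
print(list(bw.getdata()))  # only 0 and 255 appear, no greys
```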

Physicist
  • OK mate, you're pretty undescriptive and vulgar, which isn't great. `python built-in one just gives something s***` tells us **nothing**. Same applies to `strange results for the darkened parts` and `some image-processing`. Be concrete, tell us what you've tried, what the problems are specifically and how you've tried to solve them. Attaching code or pictures helps. – Aleksander Lidtke May 06 '15 at 11:46
  • Try converting to HSL and then using L.... http://stackoverflow.com/questions/2353211/hsl-to-rgb-color-conversion – Mark Setchell May 06 '15 at 12:41

1 Answer


FWIW, there are 2 versions of PIL. The original one is rather outdated, but there's a new fork called Pillow. Hopefully, you're using Pillow, but to use it effectively you need to be familiar with the Pillow docs.

image.convert('1') is not what you want here: it converts an image to 1-bit black & white, i.e., there are no greys, only pure black and pure white. The correct image mode to use is 'L' (luminance), which gives you an 8-bit greyscale image. The formula that PIL/Pillow uses to perform this conversion is
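A minimal sketch of the difference, using a synthetic single-pixel image so it needs no file on disk:

```python
from PIL import Image

# One synthetic pixel instead of a photo loaded from disk.
img = Image.new('RGB', (1, 1), (100, 150, 200))

print(img.convert('1').getpixel((0, 0)))  # 1-bit: 0 or 255, nothing in between
print(img.convert('L').getpixel((0, 0)))  # 8-bit grey, ~141 for this pixel
```

The 'L' value matches the weighted-sum formula below to within rounding.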

L = R * 299/1000 + G * 587/1000 + B * 114/1000

Those coefficients are quite common: e.g., they're used by ppmtopgm; IIRC, they've been in use since the days of NTSC analog TV. However, they may not be appropriate for other colour spaces (mostly due to issues related to gamma correction). See the Wikipedia article on the YUV colour space and its linked articles for a few other coefficient sets.
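For comparison, here's the set above next to the Rec. 709 (HDTV) coefficients, in plain Python; the exact weights come from the respective standards, and which one is "right" depends on your source material:

```python
# Grey value of one pixel under two common coefficient sets:
# Rec. 601 (the PIL/Pillow set above, NTSC-era) vs Rec. 709 (HDTV).
def luminance(r, g, b, coeffs):
    cr, cg, cb = coeffs
    return r * cr + g * cg + b * cb

REC601 = (0.299, 0.587, 0.114)
REC709 = (0.2126, 0.7152, 0.0722)

# Pure red counts for noticeably less under Rec. 709.
print(luminance(255, 0, 0, REC601))
print(luminance(255, 0, 0, REC709))
```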

Of course, it's easy enough to do the conversion with other coefficients, by operating on the pixel tuples returned by getdata, but that will be slower than using the built-in conversion.
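A sketch of what that manual conversion could look like, using the Rec. 709 coefficients purely as an example (again on a synthetic image, so it runs without an input file):

```python
from PIL import Image

# Manual greyscale conversion via getdata()/putdata(); slower than
# img.convert('L'), but the coefficients are yours to choose.
img = Image.new('RGB', (2, 2), (100, 150, 200))

grey_data = [int(r * 0.2126 + g * 0.7152 + b * 0.0722)
             for r, g, b in img.getdata()]

grey = Image.new('L', img.size)
grey.putdata(grey_data)
```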

PM 2Ring