I'm trying to do something like this. I need to extract the relative light intensity at each point of the image, and I would like to know how to do it.
The first thing that comes to my mind is to convert the image to black-and-white (grayscale). I've found three different algorithms here. I used my own image as a test to try all three algorithms, plus the built-in function in the Python Imaging Library, image.convert('1'). The first two algorithms give some strange results in the darker parts (my hair, eyebrows, etc.), while the third algorithm, 'luminosity', gives a result very similar to what I get with some image-processing software. The Python built-in conversion just gives something ridiculous. I'm not sure which of these is the best representation of light intensity, and I'm also not sure whether the camera already applies some automatic adjustment to each image, since the images are all taken under different lighting orientations.
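To make it concrete, here is a minimal sketch of the luminosity approach as I understand it, assuming Pillow and NumPy are installed and using "photo.jpg" as a placeholder for my test image. I'm also assuming the 'luminosity' weights are the Rec. 709 ones (0.21/0.72/0.07); PIL's convert('L') uses the ITU-R 601-2 weights (0.299/0.587/0.114) instead, and convert('1') produces a 1-bit dithered image, which presumably is why it looks so odd.

```python
# Minimal sketch: luminosity-weighted grayscale as relative intensity.
# Assumes Pillow and NumPy are available; "photo.jpg" is a placeholder.
import numpy as np
from PIL import Image

img = Image.open("photo.jpg").convert("RGB")
rgb = np.asarray(img, dtype=np.float64)

# 'Luminosity' weighting (Rec. 709 weights assumed here).
lum = 0.21 * rgb[..., 0] + 0.72 * rgb[..., 1] + 0.07 * rgb[..., 2]

# Relative intensity in [0, 1], normalised to the brightest pixel of this image.
relative = lum / lum.max()

# For comparison: PIL's own 8-bit grayscale (mode 'L', ITU-R 601-2 weights
# 0.299/0.587/0.114). Mode '1' is 1-bit black-and-white with dithering,
# which is why convert('1') looks so different from a grayscale conversion.
gray_pil = np.asarray(img.convert("L"), dtype=np.float64) / 255.0
```

Note that the normalisation is per image, so the result is only relative within one photo; if the camera auto-adjusts exposure between shots, the values from different images wouldn't be directly comparable.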