6

I am performing feature detection in a video using MATLAB. The lighting varies across different parts of the frame, so some regions get lost when I convert the RGB images to binary images.

The lighting in any given portion of the frame also changes over the course of the video.

Can you suggest the best method in MATLAB to balance the lighting both across each frame and over the course of the video?

Sulla
  • [Homomorphic Filtering](http://en.wikipedia.org/wiki/Homomorphic_filtering) might help you. – Lucas Feb 15 '12 at 13:34

3 Answers

8

You have two options, depending on what features you want to detect and what you want to do with the video.

  1. Ignore the illumination of the images because (as you have concluded) this contains useless or even misleading information for your feature detection.
  2. Try to repair the illumination unevenness (which is what you ask for).

1) is quite easy to do: convert your image to a colour space that separates illumination into its own channel, such as HSV (ignore the V channel), Lab (ignore L), or YUV (ignore Y), and perform your feature detection on the two remaining channels. Of these, HSV is the best (as noted by Yves Daoust in the comments); YUV and Lab leave some illumination information in the UV / ab channels. In my experience the last two can also work depending on your situation, but HSV is best.
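
A minimal MATLAB sketch of option 1, assuming `frame` is an RGB frame already in memory (e.g. one frame read from your video); the variable names are illustrative:

```matlab
hsv = rgb2hsv(frame);   % frame: m-by-n-by-3 RGB, uint8 or double
H = hsv(:,:,1);         % hue,        double in [0,1]
S = hsv(:,:,2);         % saturation, double in [0,1]
% Do your feature detection on H and S only; hsv(:,:,3) is the
% illumination-dependent V channel, which this option deliberately ignores.
```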

2) is harder. I'd start by converting the image to HSV and then doing the repair on just the V channel:

  • Apply a Gaussian blur to the V channel with a very large value for sigma; this gives you a local average for the illumination. Compute the global average V value for the image (this is a single number). Then subtract the local average from each pixel's V value and add the global average. You have now done a very crude illumination equalization. Play around a bit with the value of sigma to find one that works best; see the sketch after this list.
  • If this fails, look into the options zenpoy gives in his answer.
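
A sketch of this crude equalization, again assuming `frame` is an RGB frame; `sigma = 30` is an assumed starting value to tune, and the kernel size of roughly six sigma is a common rule of thumb, not a requirement:

```matlab
hsv = rgb2hsv(frame);
V   = hsv(:,:,3);                             % illumination channel, in [0,1]

sigma = 30;                                   % very large blur; tune this
h = fspecial('gaussian', 6*sigma + 1, sigma); % kernel covering roughly 3 sigma
localAvg  = imfilter(V, h, 'replicate');      % local illumination estimate
globalAvg = mean(V(:));                       % single number for the frame

Veq = V - localAvg + globalAvg;               % remove local trend, keep level
Veq = min(max(Veq, 0), 1);                    % clip back into the valid range

hsv(:,:,3) = Veq;
frameEq = hsv2rgb(hsv);                       % equalized RGB frame, if needed
```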

Whichever method you choose, I advise you to concentrate on what you want to do (i.e. detect features) and choose intermediate steps such as this one that suffice for your needs. So quickly try something, see how it helps your feature detection, and iterate from there.

jilles de wit
  • Ignoring L of Lab or Y of YUV won't work that well; these quantities are increasing functions of the illumination (not homogeneous quantities). The Saturation/Value pair is more appropriate because these coordinates are based on ratios of the RGB components, hence insensitive to the illumination. Any pair in xyz (not XYZ) can be used as well. –  Feb 17 '12 at 16:17
  • Isn't removing illumination variation the entire point of the question of the original poster? And hence ignoring L/Y or V a handy shortcut that could be good enough? – jilles de wit Feb 19 '12 at 12:04
  • No. Coordinates -ab of Lab and -UV of YUV still contain illumination information (they are small for a dark image and large for a light image; -ab are proportional to the cube root of intensity and -UV are directly proportional to the intensity). HS- of HSV and xyz are illumination independent. –  Feb 20 '12 at 10:02
  • Ah, yes, you are right. I was being sloppy and forgetful. In my experience a dropped L or Y channel often provides enough illumination insensitivity for simple cases. This is convenient especially in the case of YUV, which you can often acquire directly from your camera. – jilles de wit Feb 20 '12 at 11:58
  • jilles, I am sorry I have to come back on this and explain for the last time. -UV of YUV do not provide illumination insensitivity AT ALL. They are just linear combinations of RGB. If you double the amount of light, you double R, G, B, and Y, U and V altogether. The case of -ab in Lab is more exotic, since it goes as the 1/3 power of intensity. Should you double the intensity, a and b get multiplied by the cube root of 2 ≈ 1.26. By contrast, when you double intensities, H and S of HSV remain absolutely unchanged. You get insensitive only with RATIOS involving the R, G, B components. –  Feb 20 '12 at 14:24
  • I partially agree. As an (admittedly slightly lame) example: U and V never change as you go from R,G,B 0,0,0 to 255,255,255 (or 1,1,1 if you will). Both are 0 all the time. For more interesting values of U and V, adding a certain amount of white light (R=G=B=x) will not change U and V as long as R,G,B stay within the 0,255 band. Adding coloured light will change U and V, but this can be called illumination insensitivity. If you have -say- a red surface under uneven white light then the entire surface should have reasonably uniform values for U and V, and a strongly varying value for Y. – jilles de wit Feb 20 '12 at 22:40
5

This is not a trivial task, but there are many ways to try and overcome it. I can recommend that you start by implementing the Retinex algorithm, or use one of these existing implementations: http://www.cs.sfu.ca/~colour/publications/IST-2000/.

The basic idea is that the Luminance (observed image intensity) = Illumination (incident light) x Reflectance (percent reflected):

L(x,y) = I(x,y) x R(x,y) 

And you are interested in the R part.

To work on color images, for each frame first convert to the HSV color space and run the Retinex on the V (value) channel.
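
As a concrete starting point, here is a minimal single-scale Retinex sketch in MATLAB; this is only the simplest variant (the paper above covers more elaborate multi-scale versions), and `sigma` is an assumed value to tune:

```matlab
hsv = rgb2hsv(frame);                         % frame: RGB image
V   = hsv(:,:,3);                             % operate on the value channel

% Single-scale Retinex: log(L) - log(smoothed L) approximates log(R),
% using a wide Gaussian blur of V as the illumination estimate I(x,y).
sigma = 40;                                   % scale of the blur; tune this
h = fspecial('gaussian', 6*sigma + 1, sigma);
illum = imfilter(V, h, 'replicate');

epsV = 1e-6;                                  % guard against log(0)
R = log(V + epsV) - log(illum + epsV);

% Rescale the reflectance estimate to [0,1] before reinserting it.
R = (R - min(R(:))) / (max(R(:)) - min(R(:)) + epsV);
hsv(:,:,3) = R;
frameRetinex = hsv2rgb(hsv);
```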

Hope that makes sense.

zenpoy
  • Hi zenpoy, if you have time please look at this question: https://stackoverflow.com/questions/63933790/robust-algorithm-to-detect-uneven-illumination-in-images-detection-only-needed – Sivaram Rasathurai Oct 15 '20 at 01:46
5

Aside from illumination unevenness across individual images, which is addressed by Retinex or by highpass filtering, you can think of Automatic Gain Correction across the video.

The idea is to normalise the image intensities by applying a linear transform to the color components, in such a way that the average and standard deviation of all three channels combined become predefined values (average -> 128, standard deviation -> 64).
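
A hedged per-frame sketch of this in MATLAB, assuming `frame` is a uint8 RGB frame and pooling all three channels for the statistics, as described:

```matlab
F = double(frame);                   % uint8 RGB frame -> double

mu = mean(F(:));                     % average over all three channels
sd = std(F(:));                      % standard deviation over all channels

targetMu = 128;                      % predefined targets from above
targetSd = 64;

% The same linear transform is applied to R, G and B alike.
G = (F - mu) * (targetSd / sd) + targetMu;
G = uint8(min(max(G, 0), 255));      % clip and convert back to uint8
```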

Histogram equalization will have a similar effect of "standardizing" the intensity levels.
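
If you go the histogram-equalization route, one minimal sketch is to equalize only the intensity channel per frame (equalizing R, G and B independently would shift the colours):

```matlab
hsv = rgb2hsv(frame);
hsv(:,:,3) = histeq(hsv(:,:,3));   % equalize the V channel only
frameHeq = hsv2rgb(hsv);
```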

Unfortunately, large scene changes will impact this process in such a way that the intensities of the background won't remain constant as you'd expect them to.