
I need to do fairly sensitive color (brightness) measurements in webcam footage using OpenCV. The problem is that the ambient light fluctuates, which makes it hard to get accurate results. I'm looking for a way to continuously correct sequential frames of the video so as to smooth out the global lighting differences. The light changes I'm trying to filter out occur globally, in most or all of the image. I have tried to calculate a difference and subtract it, but with little luck. Does anyone have any advice on how to approach this problem?

EDIT: The two images below are from the same video, with color changes slightly magnified. If you alternate between them, you'll see that there are slight changes in lighting, probably due to clouds shifting outside. The problem is that these changes obscure any other color changes I might want to detect.

So I would like to filter out these particular changes. Since I only need part of each frame I capture, I figured it should be possible to measure the lighting changes where they occur in the rest of the footage, outside my area of interest, and filter them out.

I have tried to capture the dominant frequencies in the changes using OpenCV's dft function, so that I could simply ignore the changes in lighting, but I am not familiar enough with how to use it. I have only been using OpenCV for a week, so I am still learning.

[Two frames from the same video, showing a slight global shift in lighting]

FHannes
  • can you upload an example for several frames so we'll have a better understanding of your question? – ibezito Jul 01 '16 at 14:30
  • I'll do that when I get home. But to sketch the situation a bit: I am experimenting with Eulerian Video Magnification to amplify color changes in a video. The problem is that it also amplifies subtle lighting changes, which causes considerable noise in the video I'm trying to analyze. Since I only need to analyze part of the video, I gathered that, as the lighting changes are global, I could somehow filter them out of the frames without losing the color changes I'm trying to detect. – FHannes Jul 01 '16 at 15:29
  • have a look at http://stackoverflow.com/questions/24341114/simple-illumination-correction-in-images-opencv-c – Matthias Rittler Jul 04 '16 at 10:02
  • Though interesting, CLAHE seems to have little effect on my particular issue. It also seems important to note that it works on single frames, not a time series, so any changes it makes might not be consistent throughout all of the frames I'm analyzing. – FHannes Jul 05 '16 at 08:48
  • Can you transform the pixels from RGB into YUV representation and then work only on the YUV? Or normalize the RGB as a vector to work only with their orientation, not their luminance. Either of these with a global normalization/low-pass filter would help. – Jason Harrison Jul 05 '16 at 19:29
  • Given the noise of the camera (high frequency in time and space), I'd look at lowering the resolution through a low-pass filter, or a Gaussian filter, and using that image as your baseline for further luminance change correction. – Jason Harrison Jul 05 '16 at 19:32
  • are you able to place an easily detectable reference object (QR Code?) in your image? – Micka Jul 06 '16 at 14:30
  • No, the goal is to work in a foreign environment. – FHannes Jul 06 '16 at 15:02
  • Rather than making this difficult by filtering out illumination changes from the resulting image, why don't you simply illuminate the target with lights bright enough to make ambient light variations negligible? For something the size of a mouse that should be pretty easy. – DisappointedByUnaccountableMod Oct 25 '17 at 13:55

3 Answers


Short answer: temporal low-pass filter on illumination as a whole

Consider the illumination, conceptually, as a time sequence of values representing something like the light flux impinging upon the scene being photographed. Your ideal situation is that this function be constant, but the second-best situation is that it vary as slowly as possible. A low-pass filter changes a function that can vary rapidly into one that varies more slowly. The basic steps are thus: (1) calculate a total illumination function; (2) compute a new illumination function using a low-pass filter; (3) normalize the original image sequence to the new illumination values.

(1) The simplest way of calculating an illumination function is to add up all the luminance values for each pixel in the image. In simple cases, this might even work; you might guess from my tone that there are a number of caveats.

An important issue is that you'd prefer to add up illumination values not in some color space (such as HSV) but rather in some physical measure of illumination. Going back from a color space to the actual light in the room requires data that's not in the image, such as the spectral reflectivity of each surface, so that's unlikely to be practical. As a proxy, you can use only part of the image, one that has a consistent reflectivity. In the sample images, the desk surface at the top of the image could be used: select a geometric region and compute a total illumination number from that.

Related to this, if you have regions of the image where the camera has saturated, you've lost a lot of information and the total illumination value won't relate well to the physical illumination. Simply cut out any such regions (but do it consistently across all frames).
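
A minimal sketch of step (1) in Python with OpenCV, assuming the frames are BGR NumPy arrays (e.g. read from cv2.VideoCapture); the roi coordinates and the saturation threshold are placeholders you'd tune to your own footage:

```python
import cv2
import numpy as np

def illumination_series(frames, roi=(0, 0, 100, 50), sat_thresh=250):
    """Total illumination per frame, measured over a fixed patch of
    consistent reflectivity (e.g. the desk surface)."""
    x, y, w, h = roi
    patches = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)[y:y + h, x:x + w].astype(np.float64)
               for f in frames]
    # Exclude pixels that saturate in *any* frame, so the same region
    # is cut out consistently across the whole sequence.
    valid = np.all(np.stack(patches) < sat_thresh, axis=0)
    return np.array([p[valid].mean() for p in patches])
```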

(2) Compute a low-pass filter on the illumination function. These transforms are a fundamental part of every signal-processing package. I don't know enough about OpenCV to say whether it has an appropriate function itself, so you might need another library. There are lots of different kinds of low-pass filters, but they should all give you similar results here.
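
As a concrete example, a hand-rolled exponential moving average is one of the simplest low-pass filters (scipy.signal has proper filter designs if you need a sharper cutoff); alpha is an illustrative parameter, with smaller values giving heavier smoothing:

```python
import numpy as np

def low_pass(series, alpha=0.05):
    """Exponential moving average: a simple first-order low-pass filter.
    Smaller alpha -> heavier smoothing (only slow variation survives)."""
    out = np.empty(len(series), dtype=np.float64)
    out[0] = series[0]
    for t in range(1, len(series)):
        out[t] = alpha * series[t] + (1.0 - alpha) * out[t - 1]
    return out
```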

(3) Once you've got a low-pass time series, you want to use it as a normalization function for the total illumination. Compute the average value of the low-pass series and divide by it, yielding a time series with average value 1. Now transform each image by multiplying the illumination in the image by the normalization factor. All the warnings about working ideally in a physical illumination space and not a color space apply.
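
One way to realize step (3), under the assumption that scaling a frame's pixel values scales its measured illumination proportionally: multiply each frame by the ratio of its low-pass illumination to its measured illumination, so fast flicker is divided out while slow trends survive:

```python
import numpy as np

def normalize_frames(frames, measured, smoothed):
    """Scale each frame so its illumination follows the low-pass series."""
    corrected = []
    for frame, m, s in zip(frames, measured, smoothed):
        gain = s / m  # >1 brightens a momentarily dark frame, <1 dims a bright one
        out = np.clip(frame.astype(np.float64) * gain, 0, 255)
        corrected.append(out.astype(np.uint8))
    return corrected
```

Chaining the three sketches: s = illumination_series(frames); corrected = normalize_frames(frames, s, low_pass(s)).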

eh9

If the signal change is global, you should try to calculate the mean m(i,t) for each row i of each frame at time t in your video. Without fluctuating light, the ratio m(i,t)/m(i,t+1) should be 1 at all times. If there is a global change, then m(i,t)/m(i,t+1) should be constant for every i; it's more robust to use the mean of m(i,t)/m(i,t+1) over all i. This mean value can then be used to correct your frame at time t.

You can also work with a ratio like m(i,0)/m(i,t); the image at time 0 is then a reference. Instead of rows, you can use columns, or a disc or rectangle...
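
A sketch of this idea, assuming BGR NumPy frames and taking frame 0 as the reference as suggested above; the per-row ratios m(i,0)/m(i,t) are averaged into a single global gain:

```python
import numpy as np

def correct_global_light(frame, reference):
    """Estimate one global gain as the mean over rows i of m(i,0)/m(i,t),
    then apply it to the whole frame at time t."""
    m_ref = reference.astype(np.float64).mean(axis=(1, 2))  # m(i, 0)
    m_cur = frame.astype(np.float64).mean(axis=(1, 2))      # m(i, t)
    gain = np.mean(m_ref / np.maximum(m_cur, 1e-6))         # guard against divide-by-zero
    return np.clip(frame.astype(np.float64) * gain, 0, 255).astype(np.uint8)
```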

LBerger
  • The signal change is global, yes, but there will also be local color changes I want to detect, so the changes through time won't always be constant everywhere. I have already tried to subtract a minimum global illumination change from the frames, but that had little effect. – FHannes Jul 05 '16 at 08:46
  • It's not subtract but divide. You have motion in your video, so you will need to register the images before any procedure. – LBerger Jul 06 '16 at 12:06
  • About the filtering method: if there is no motion, it could be a good answer, as @eh9 suggests. I have some temporal filtering code in https://github.com/LaurentBerger/AmplificationMouvement. – LBerger Jul 06 '16 at 15:27
  • Instead of Eulerian magnification, try Riesz Pyramids for Fast Phase-Based Video Magnification. – LBerger Jul 07 '16 at 06:47
  • I am looking to magnify color changes though, not motion. As I understand it, Riesz Pyramids are intended to magnify motion? – FHannes Jul 07 '16 at 09:21
  • You wrote "I am experimenting with Eulerian Video Magnification"; I think that's http://people.csail.mit.edu/mrub/vidmag/. There is also http://people.csail.mit.edu/nwadhwa/phase-video/ for the same problem. – LBerger Jul 07 '16 at 09:35

I think you can apply homomorphic filtering to each of the frames to compute the reflectance component of the frame. Then you can track the varying reflectance at selected points.

According to the illumination-reflectance model of image formation, the pixel value at a given position is the product of illumination and reflectance: f(x,y) = i(x,y) · r(x,y). The illumination i tends to vary slowly across the image (or in your case, frame), while the reflectance r tends to vary rapidly.

Using homomorphic filtering, you can filter out the illumination component. It takes the logarithm of the above equation, so the illumination and reflectance components become additive: ln(f(x,y)) = ln(i(x,y)) + ln(r(x,y)). You then apply a high-pass filter to retain the reflectance component, filtering out the slowly varying illumination component. Take a look here and here for a detailed explanation of the process with examples.

After applying the filter, you'll have the estimated reflectance frames r^(x,y,t).
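
A minimal sketch of the idea: the classic homomorphic filter does the high-pass in the frequency domain (e.g. with cv2.dft and a Gaussian high-pass mask), but subtracting a heavily blurred version of the log image is a common spatial-domain approximation; sigma is an illustrative parameter:

```python
import cv2
import numpy as np

def reflectance_estimate(frame, sigma=31):
    """Homomorphic-style reflectance estimate:
    ln f = ln i + ln r, so high-passing ln f keeps (roughly) ln r."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float64)
    log_f = np.log1p(gray)                          # log1p avoids log(0)
    log_i = cv2.GaussianBlur(log_f, (0, 0), sigma)  # slowly varying ~ illumination
    log_r = log_f - log_i                           # high-pass residual ~ reflectance
    return np.expm1(log_r)                          # back out of log space
```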

dhanushka