
I'd like to analyze the constantly updating image feed that comes from an iPhone camera to determine a general "lightness coefficient": if the coefficient is 0.0, the image is completely black; if it is 1.0, the image is completely white. Of course, the values in between are the ones I care about most (background info: I'm using this coefficient to calculate the intensity of some blending effects in my fragment shader).

So I'm wondering: should I run a for loop over my pixel buffer, analyze the image every frame (30 fps), and send the coefficient as a uniform to my fragment shader, or is there a way to analyze the image in OpenGL itself? If so, how should I do that?
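For reference, the uniform-upload side this describes would look roughly like the sketch below in OpenGL ES 2.0. The `program` handle and the `u_lightness` name are illustrative, not from any particular codebase:

GLint loc = glGetUniformLocation(program, "u_lightness"); // look up once after linking
glUseProgram(program);
glUniform1f(loc, lightness); // per-frame coefficient in 0.0-1.0, computed on the CPU

In the fragment shader, a matching `uniform highp float u_lightness;` can then drive the blend intensity, e.g. `mix(colorA, colorB, u_lightness)`.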

– polyclick
  • Which OpenGL version is available? – KillianDS Aug 21 '12 at 10:52
  • A similar question was asked more recently here: http://stackoverflow.com/questions/12168072/fragment-shader-average-luminosity , and the answers there describe a few ways of doing this using OpenGL ES. – Brad Larson Sep 04 '12 at 20:27

2 Answers


There are several possible approaches, each with its own strengths and weaknesses.

On the CPU, it's fairly simple: loop over the pixels, sum them up, divide, and that's it. It's five minutes of work, and a good implementation will take a few milliseconds per frame.

int count = width * height * channels;     // e.g. 4 channels for a BGRA buffer
long long sum = 0;                         // wide accumulator: a 32-bit int can overflow here
for (int i = 0; i < count; i++)
    sum += buffer[i];
double avg = double(sum) / double(count);  // 0-255; divide by 255.0 to get 0.0-1.0
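Since the goal is a perceptual 0.0-1.0 lightness value, a weighted variant may fit better than a raw mean over all channels. A minimal sketch, assuming a BGRA buffer (the usual iPhone camera format) with no row padding, using Rec. 601 luma weights; `buffer`, `width`, and `height` are assumed to come from the capture callback:

double lumaSum = 0.0;
int pixelCount = width * height;
for (int p = 0; p < pixelCount; p++) {
    const unsigned char *px = buffer + p * 4;      // BGRA layout: px[0]=B, px[1]=G, px[2]=R
    lumaSum += 0.114 * px[0] + 0.587 * px[1] + 0.299 * px[2];
}
double lightness = lumaSum / (pixelCount * 255.0); // 0.0 = black, 1.0 = white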

On the GPU, it is likely to be much faster, but there are a few drawbacks. The first is the amount of work needed just to put everything in place; the GPUImage framework will save you some of it, but it will also add a lot of code, which may be a waste if all you want to do is sum pixels. The second is that sending the pixels to the GPU may take longer than summing them on the CPU. The GPU justifies the effort only if you really need serious processing.
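To give a flavor of the GPU route: the standard trick is an iterative reduction, where each pass averages 2x2 blocks of the previous level into a render target half the size, until a 1x1 texture holds the mean that you read back. A sketch of one reduction pass as a GLSL ES fragment shader (illustrative only, not code from GPUImage):

static const char *kReducePassShader =
    "varying highp vec2 texCoord;\n"
    "uniform sampler2D inputTexture;\n"
    "uniform highp vec2 texelSize;\n"   // 1.0 / source texture dimensions
    "void main() {\n"
    "    highp vec4 sum =\n"
    "        texture2D(inputTexture, texCoord + texelSize * vec2(-0.5, -0.5)) +\n"
    "        texture2D(inputTexture, texCoord + texelSize * vec2( 0.5, -0.5)) +\n"
    "        texture2D(inputTexture, texCoord + texelSize * vec2(-0.5,  0.5)) +\n"
    "        texture2D(inputTexture, texCoord + texelSize * vec2( 0.5,  0.5));\n"
    "    gl_FragColor = sum * 0.25;\n"  // average of the 2x2 block
    "}\n";

Each pass renders a quad into an FBO half the size of its input; after about log2(N) passes, a glReadPixels on the 1x1 result gives the average with far less readback bandwidth than pulling the whole frame off the GPU.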

A third option, using the CPU with a library, has the drawback that you add a dependency for something you can do in 10 lines, but the result will be clean and readable. Again, it's justified if you also use the library for other tasks. Here is an example in OpenCV:

cv::Mat frame(height, width, type, buffer);   // wraps the buffer without copying (type e.g. CV_8UC4)
cv::Scalar channelSums = cv::sum(frame);      // per-channel sums; unused entries are 0
double avgLuminance = (channelSums[0] + channelSums[1] + channelSums[2] + channelSums[3])
                      / (double(frame.total()) * frame.channels());
– Sam
  • As a note, I am working on a GPU-based implementation of whole-image color averaging (which could be combined with a luminance conversion to do what is wanted here). Iterating through every pixel on the CPU is an extremely slow process, although Accelerate or lower level NEON instructions can help you here. Apple describes a GPU-based iterative reduction process here: http://http.developer.nvidia.com/GPUGems3/gpugems3_ch26.html that should be many times faster than this. I hopefully will have this functional soon. – Brad Larson Aug 22 '12 at 16:04
  • Seems that for your needs, an exact sum is overkill. You could very well use only 1 pixel in each 10x10 block (see the sketch after these comments). No need to ruin your poor iPhone's bandwidth by sending an image to OpenGL at 30 fps. – Calvin1602 Aug 23 '12 at 13:34
  • OK, I ended up getting this working using OpenGL ES, and I describe the process here: http://stackoverflow.com/a/12169560/19679 . Using the GPUImageLuminosity class, you can get the average luminance from the iOS camera by using just a few lines of code. It ended up being more than 3X faster than an on-CPU iteration like you describe above, with all image upload and extraction overhead factored in. – Brad Larson Sep 04 '12 at 20:31
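As a concrete illustration of Calvin1602's subsampling suggestion, here is a minimal sketch assuming a BGRA buffer and a `bytesPerRow` stride taken from the pixel buffer (names illustrative):

double sum = 0.0;
int samples = 0;
for (int y = 0; y < height; y += 10) {
    for (int x = 0; x < width; x += 10) {       // one pixel per 10x10 block
        const unsigned char *px = buffer + y * bytesPerRow + x * 4;
        sum += (px[0] + px[1] + px[2]) / 3.0;   // unweighted channel mean
        samples++;
    }
}
double lightness = sum / (samples * 255.0);     // roughly 1% of the per-pixel work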

There is of course OpenCL, which allows you to use your GPU for general processing.
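For completeness, the core of an OpenCL approach would be a parallel partial-sum kernel like the illustrative one below (kernel source only; host setup and the final reduction of the partial sums are omitted). Note, though, that OpenCL has never been a public API on iOS, so on the iPhone itself this remains a sketch rather than a shippable path:

static const char *kPartialSumKernel =
    "__kernel void partial_sum(__global const uchar *pixels,\n"
    "                          const int count,\n"
    "                          __global ulong *partials) {\n"
    "    int gid = get_global_id(0);\n"
    "    int stride = get_global_size(0);\n"
    "    ulong sum = 0;\n"
    "    for (int i = gid; i < count; i += stride)\n"  // strided loop over the buffer
    "        sum += pixels[i];\n"
    "    partials[gid] = sum;\n"                       // one partial sum per work-item
    "}\n";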

– life of pi