I'd like to analyze the constantly updating image feed coming from an iPhone camera to determine a general "lightness coefficient": if the coefficient is 0.0, the image is completely black; if it's 1.0, the image is completely white. Of course, the values in between are the ones I care about most (background: I'm using this coefficient to calculate the intensity of some blending effects in my fragment shader).
So I'm wondering: should I run a for loop over my pixel buffer every frame (at 30 fps) to analyze the image on the CPU, sending the coefficient to my fragment shader as a uniform, or is there a way to do this analysis directly in OpenGL? If so, how should I do that?
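For reference, here's roughly what I have in mind for the CPU-side approach: a minimal sketch that assumes the camera delivers `kCVPixelFormatType_32BGRA` buffers and subsamples pixels to keep the per-frame cost down (the function name and the stride parameter are just illustrative):

```swift
import CoreVideo

// Sketch: average luminance of a BGRA pixel buffer, sampling every
// `sampleStride`-th pixel in each direction. Assumes 32BGRA format.
func lightnessCoefficient(of pixelBuffer: CVPixelBuffer, sampleStride: Int = 8) -> Float {
    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

    guard let base = CVPixelBufferGetBaseAddress(pixelBuffer) else { return 0 }
    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)
    let pixels = base.assumingMemoryBound(to: UInt8.self)

    var total: Float = 0
    var count = 0
    for y in stride(from: 0, to: height, by: sampleStride) {
        for x in stride(from: 0, to: width, by: sampleStride) {
            let offset = y * bytesPerRow + x * 4   // BGRA: 4 bytes per pixel
            let b = Float(pixels[offset])
            let g = Float(pixels[offset + 1])
            let r = Float(pixels[offset + 2])
            // Rec. 601 luma weights, normalized to 0.0...1.0
            total += (0.299 * r + 0.587 * g + 0.114 * b) / 255.0
            count += 1
        }
    }
    return count > 0 ? total / Float(count) : 0
}
```

I'd then upload the result each frame with something like `glUniform1f(lightnessLocation, coefficient)`. My worry is whether this per-frame CPU loop is cheap enough at 30 fps, which is why I'm asking about an OpenGL-side alternative.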