11

I'm aware of AVFoundation and its capture support (not too familiar though). However, I don't see any readily-accessible API to get pixel-by-pixel data (RGB-per-pixel or similar). I do recall reading in the docs that this is possible, but I don't really see how. So:

  1. Can this be done? If so, how?
  2. Would I be getting raw image data, or data that's been JPEG-compressed?
FeifanZ

2 Answers

32

AV Foundation can give you back the raw bytes for an image captured by either the video or still camera. You need to set up an AVCaptureSession with an appropriate AVCaptureDevice, a corresponding AVCaptureDeviceInput, and an output (an AVCaptureVideoDataOutput or AVCaptureStillImageOutput). Apple has some examples of this process in their documentation, and it requires some boilerplate code to configure.
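As a rough sketch of that boilerplate (assuming your class will act as the sample buffer delegate; error handling and session presets omitted), the setup looks something like:

    AVCaptureSession *session = [[AVCaptureSession alloc] init];

    // Grab the default video camera and wrap it in an input.
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (input) [session addInput:input];

    // Ask for BGRA so each pixel arrives as 4 bytes: B, G, R, A.
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    output.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
    [output setSampleBufferDelegate:self
                              queue:dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)];
    [session addOutput:output];

    [session startRunning];

Requesting BGRA up front spares you from having to deal with the camera's native planar YUV layout.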

Once you have your capture session configured and you are capturing data from the camera, you will set up a -captureOutput:didOutputSampleBuffer:fromConnection: delegate method, where one of the parameters will be a CMSampleBufferRef. That will have a CVImageBufferRef within it that you access via CMSampleBufferGetImageBuffer(). Calling CVPixelBufferGetBaseAddress() on that pixel buffer (after locking it with CVPixelBufferLockBaseAddress()) will return the base address of the byte array for the raw pixel data representing your camera frame. This can be in a few different formats, but the most common are BGRA and planar YUV.
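The delegate method itself might look like the following sketch (assuming you configured the output for 32BGRA as above):

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

        // The base address is only valid between lock/unlock calls.
        CVPixelBufferLockBaseAddress(pixelBuffer, 0);

        unsigned char *baseAddress = (unsigned char *)CVPixelBufferGetBaseAddress(pixelBuffer);
        size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
        size_t width  = CVPixelBufferGetWidth(pixelBuffer);
        size_t height = CVPixelBufferGetHeight(pixelBuffer);

        // With 32BGRA, pixel (x, y) starts at:
        //     baseAddress + y * bytesPerRow + x * 4
        // and its bytes are ordered B, G, R, A. Note that bytesPerRow may be
        // larger than width * 4 due to row padding, so always use it for the stride.

        CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    }

To answer the second part of the question: these are uncompressed pixels straight from the capture pipeline, not JPEG data.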

I have an example application that uses this here, but I'd recommend that you also take a look at my open source framework which wraps the standard AV Foundation boilerplate and makes it easy to perform image processing on the GPU. Depending on what you want to do with these raw camera bytes, I may already have something you can use there or a means of doing it much faster than with on-CPU processing.

Brad Larson
    Finally got around to this…the code you posted here really helped though: http://stackoverflow.com/a/11615472/472768 Thanks! – FeifanZ Aug 01 '12 at 23:21
  • Could you explain what I should provide in that case in input of this function? `AlprResults recognize(unsigned char* pixelData, int bytesPerPixel, int imgWidth, int imgHeight, std::vector regionsOfInterest);` I understand point only about `pixelData` and `regionsOfInterest`. – Alexander Yakovlev Feb 07 '17 at 07:54
  • @SashaKid - I have no idea what that function does, and that sounds like an entirely separate question. – Brad Larson Feb 08 '17 at 15:04
  • @BradLarson can you give some help with getting color from GPUImage2? At the moment I've done getting output from SolidColorGenerator to RenderView with averaged color. But I have no idea how to get UIColor from it. Can you help me? – WINSergey Oct 30 '18 at 09:53
  • @WINSergey - If you just need an average color, an AverageColorExtractor will provide a callback that returns the RGBA components of the average color. If you need the color for a specific pixel onscreen, you can attach a RawDataOutput to extract the raw bytes for the image and pull the ones corresponding to the pixel you want. – Brad Larson Oct 30 '18 at 15:29
-4
 // Rec. 709 luminance weights; GPUImage defines W in its luminance shaders.
 const mediump vec3 W = vec3(0.2125, 0.7154, 0.0721);

 lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
 float luminance = dot(textureColor.rgb, W);

 mediump vec2 p = textureCoordinate;
 // Keep the original color inside the region 0.2 < x < 0.6, 0.4 < y < 0.6;
 // output transparent black elsewhere. (The original compared p.x == 0.2,
 // which almost never matches exactly in floating point; > is intended.)
 if (p.x > 0.2 && p.x < 0.6 && p.y > 0.4 && p.y < 0.6) {
     gl_FragColor = textureColor;
 } else {
     gl_FragColor = vec4(0.0);
 }
Opal
Ramkumar Paulraj