57

I'm having some problems getting a UIImage from a CVPixelBuffer. This is what I am trying:

CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(imageDataSampleBuffer);
CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, imageDataSampleBuffer, kCMAttachmentMode_ShouldPropagate);
CIImage *ciImage = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:(NSDictionary *)attachments];
if (attachments)
    CFRelease(attachments);
size_t width = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);
if (width && height) { // test to make sure we have valid dimensions
    UIImage *image = [[UIImage alloc] initWithCIImage:ciImage];
    
    UIImageView *lv = [[UIImageView alloc] initWithFrame:self.view.frame];
    lv.contentMode = UIViewContentModeScaleAspectFill;
    self.lockedView = lv;
    [lv release];
    self.lockedView.image = image;
    [image release];
}
[ciImage release];

height and width are both correctly set to the resolution of the camera. The image is created, but it seems to be black (or maybe transparent?). I can't quite understand where the problem is. Any ideas would be appreciated.

mahboudz
  • Do you definitely want a CIImage in between (e.g. because you're going to throw some intermediate CIFilters in), or would it be acceptable just to go CGBitmapContextCreate -> UIImage? – Tommy Nov 10 '11 at 12:21
  • For now, I just want to display it in a view and see what I am dealing with. Down the road, I'd like to play with the pixels. – mahboudz Nov 11 '11 at 10:31

6 Answers

61

First of all, the obvious stuff that doesn't relate directly to your question: AVCaptureVideoPreviewLayer is the cheapest way to pipe video from either of the cameras into an independent view, if that's where the data is coming from and you have no immediate plans to modify it. You don't have to do any pushing yourself; the preview layer is directly connected to the AVCaptureSession and updates itself.

I have to admit to lacking confidence about the central question. There's a semantic difference between a CIImage and the other two types of image: a CIImage is a recipe for an image and is not necessarily backed by pixels. It can be something like "take the pixels from here, transform like this, apply this filter, transform like this, merge with this other image, apply this filter". The system doesn't know what a CIImage looks like until you choose to render it. It also doesn't inherently know the appropriate bounds in which to rasterise it.

UIImage purports merely to wrap a CIImage. It doesn't convert it to pixels. Presumably UIImageView should achieve that, but if so then I can't seem to find where you'd supply the appropriate output rectangle.

I've had success just dodging around the issue with:

CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];

CIContext *temporaryContext = [CIContext contextWithOptions:nil];
CGImageRef videoImage = [temporaryContext
                   createCGImage:ciImage
                   fromRect:CGRectMake(0, 0, 
                          CVPixelBufferGetWidth(pixelBuffer),
                          CVPixelBufferGetHeight(pixelBuffer))];

UIImage *uiImage = [UIImage imageWithCGImage:videoImage];
CGImageRelease(videoImage);

Which gives an obvious opportunity to specify the output rectangle. I'm sure there's a route through without using a CGImage as an intermediary, so please don't assume this solution is best practice.

Tommy
  • Thanks, I will give it a try. The reason the previewLayer isn't useful is that I need more resolution. And the reason I am going with CIImage instead of a JPEG representation is to see whether the JPEG compression is adding significant artifacts. I may, in fact, choose to stay with JPEG if the artifacts are minimal. – mahboudz Nov 11 '11 at 21:28
58

Try this one in Swift.

Swift 4.2:

import VideoToolbox

extension UIImage {
    public convenience init?(pixelBuffer: CVPixelBuffer) {
        var cgImage: CGImage?
        VTCreateCGImageFromCVPixelBuffer(pixelBuffer, nil, &cgImage)

        guard let cgImage = cgImage else {
            return nil
        }

        self.init(cgImage: cgImage)
    }
}

Swift 5:

import VideoToolbox

extension UIImage {
    public convenience init?(pixelBuffer: CVPixelBuffer) {
        var cgImage: CGImage?
        VTCreateCGImageFromCVPixelBuffer(pixelBuffer, options: nil, imageOut: &cgImage)

        guard let cgImage = cgImage else {
            return nil
        } 

        self.init(cgImage: cgImage)
    }
}

Note: This only works for RGB pixel buffers, not for grayscale.

Andrey M.
  • You should `import VideoToolbox` – Husam Jul 21 '18 at 01:26
  • I had to change the this line in swift 5: ```VTCreateCGImageFromCVPixelBuffer(pixelBuffer, options: nil, imageOut: &cgImage)``` – Olshansky Apr 08 '19 at 00:21
  • this method is slow. Any other alternative ? – Pavan K Jul 26 '19 at 21:55
  • Can you convert only a selected (x, y, width, height) region of pixel buffer to an image because I presume you can save lots of resources if you use it often in an application. – Hope Aug 18 '20 at 06:56
  • This is the best answer. Don't get discouraged by the "this method is slow" comment, as it's not exactly correct. The conversion from `CVPixelBuffer` to `CGImage` described in this answer took only 0.0004 sec on average (I tested the speed on iPhone 10, with `AVVideoCodecType.jpeg`). UIImage creation from a CGImage can be slow-ish regardless of how the CGImage is generated; that is not a problem with this particular method. – timbre timbre Dec 02 '21 at 18:48
12

Another way to get a UIImage. This performs ~10 times faster, at least in my case:

size_t w = CVPixelBufferGetWidth(pixelBuffer);
size_t h = CVPixelBufferGetHeight(pixelBuffer);
size_t srcBytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
size_t bytesPerPixel = srcBytesPerRow / w;

// The base address is only valid while the buffer is locked.
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
unsigned char *buffer = CVPixelBufferGetBaseAddress(pixelBuffer);

UIGraphicsBeginImageContext(CGSizeMake(w, h));

CGContextRef c = UIGraphicsGetCurrentContext();

unsigned char *data = CGBitmapContextGetData(c);
if (data != NULL) {
    size_t dstBytesPerRow = CGBitmapContextGetBytesPerRow(c);
    for (size_t y = 0; y < h; y++) {
        for (size_t x = 0; x < w; x++) {
            // Rows may be padded, so compute each side's offset from
            // its own bytes-per-row value.
            size_t srcOffset = srcBytesPerRow * y + bytesPerPixel * x;
            size_t dstOffset = dstBytesPerRow * y + bytesPerPixel * x;
            data[dstOffset]     = buffer[srcOffset];     // R
            data[dstOffset + 1] = buffer[srcOffset + 1]; // G
            data[dstOffset + 2] = buffer[srcOffset + 2]; // B
            data[dstOffset + 3] = buffer[srcOffset + 3]; // A
        }
    }
}
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();

UIGraphicsEndImageContext();
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
Jonathan Cichon
  • You should use an incrementing pointer, that will get you a tiny speed boost as well – jjxtra Feb 06 '13 at 17:09
  • You'll need to insert a call to CVPixelBufferLockBaseAddress before the call to CVPixelBufferGetBaseAddress and call CVPixelBufferUnlockBaseAddress after the data copy. Also you might want to consider using CVPixelBufferGetDataSize and memcpy() to perform a single block copy of the data. – Dave Durbin Feb 09 '14 at 13:27
  • Faster than what? – Benedikt S. Vogler Jan 04 '20 at 16:25
  • Would be you kind enough for writing a Swift version of this? Is this even possible in Swift? @Jonathan Cichon – MJQZ1347 Mar 03 '20 at 16:13
8

A modern solution would be:

let image = UIImage(ciImage: CIImage(cvPixelBuffer: YOUR_BUFFER))
bitemybyte
7

Unless your image data is in some different format that requires swizzling or conversion, I would recommend not incrementing anything... just smack the data into your context memory area with memcpy, as in:

//not here... unsigned char *buffer = CVPixelBufferGetBaseAddress(pixelBuffer);

UIGraphicsBeginImageContext(CGSizeMake(w, h));

CGContextRef c = UIGraphicsGetCurrentContext();

void *ctxData = CGBitmapContextGetData(c);

// MUST READ-WRITE LOCK THE PIXEL BUFFER!!!!
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
void *pxData = CVPixelBufferGetBaseAddress(pixelBuffer);
// NOTE: assumes CVPixelBufferGetBytesPerRow(pixelBuffer) == 4 * w,
// i.e. no row padding; otherwise copy row by row.
memcpy(ctxData, pxData, 4 * w * h);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

... and so on...
joe
  • I've got ~50% fps boost on older devices compared to the CGImageCreate path. Thanks! – Anton Tropashko Nov 17 '15 at 09:12
  • Be careful though, because there are often padding bytes at the end of rows in the CVPixelBuffer. I.e. CVPixelBufferGetBytesPerRow may be more than you expect. Then your copied image output will look all slanty. – Baxissimo May 16 '17 at 23:59
3

The previous methods led to a CG Raster Data leak for me. This method of conversion did not leak:

@autoreleasepool {

    CGImageRef cgImage = NULL;
    OSStatus res = CreateCGImageFromCVPixelBuffer(pixelBuffer, &cgImage);
    if (res == noErr) {
        UIImage *image = [UIImage imageWithCGImage:cgImage scale:1.0 orientation:UIImageOrientationUp];
        // use image here; the CGImage is released below
    }
    CGImageRelease(cgImage);
}


    static OSStatus CreateCGImageFromCVPixelBuffer(CVPixelBufferRef pixelBuffer, CGImageRef *imageOut)
    {
        OSStatus err = noErr;
        OSType sourcePixelFormat;
        size_t width, height, sourceRowBytes;
        void *sourceBaseAddr = NULL;
        CGBitmapInfo bitmapInfo;
        CGColorSpaceRef colorspace = NULL;
        CGDataProviderRef provider = NULL;
        CGImageRef image = NULL;

        sourcePixelFormat = CVPixelBufferGetPixelFormatType( pixelBuffer );
        if ( kCVPixelFormatType_32ARGB == sourcePixelFormat )
            bitmapInfo = kCGBitmapByteOrder32Big | kCGImageAlphaNoneSkipFirst;
        else if ( kCVPixelFormatType_32BGRA == sourcePixelFormat )
            bitmapInfo = kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst;
        else
            return -95014; // only uncompressed pixel formats

        sourceRowBytes = CVPixelBufferGetBytesPerRow( pixelBuffer );
        width = CVPixelBufferGetWidth( pixelBuffer );
        height = CVPixelBufferGetHeight( pixelBuffer );

        CVPixelBufferLockBaseAddress( pixelBuffer, 0 );
        sourceBaseAddr = CVPixelBufferGetBaseAddress( pixelBuffer );

        colorspace = CGColorSpaceCreateDeviceRGB();

        CVPixelBufferRetain( pixelBuffer );
        provider = CGDataProviderCreateWithData( (void *)pixelBuffer, sourceBaseAddr, sourceRowBytes * height, ReleaseCVPixelBuffer);
        image = CGImageCreate(width, height, 8, 32, sourceRowBytes, colorspace, bitmapInfo, provider, NULL, true, kCGRenderingIntentDefault);

        if ( err && image ) {
            CGImageRelease( image );
            image = NULL;
        }
        if ( provider ) CGDataProviderRelease( provider );
        if ( colorspace ) CGColorSpaceRelease( colorspace );
        *imageOut = image;
        return err;
    }

    static void ReleaseCVPixelBuffer(void *pixel, const void *data, size_t size)
    {
        CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)pixel;
        CVPixelBufferUnlockBaseAddress( pixelBuffer, 0 );
        CVPixelBufferRelease( pixelBuffer );
    }
Vlad