14

I want to convert a UIImage object to a CVPixelBufferRef object, but I have absolutely no idea how. And I can't find any example code doing anything like this.

Can someone please help me? THX in advance!

C YA

Noah Witherspoon
Nuker

5 Answers

9

There are different ways to do that. The function below creates a pixel buffer from a CGImage. UIImage is a wrapper around CGImage, so to get a CGImage you just need to access the image's .CGImage property.
The other ways are to create a CIImage and render it into the buffer (already posted in another answer), or to use the Accelerate framework, which is probably the fastest but also the hardest.

- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image
{
    NSDictionary *options = @{
                              (NSString *)kCVPixelBufferCGImageCompatibilityKey : @YES,
                              (NSString *)kCVPixelBufferCGBitmapContextCompatibilityKey : @YES,
                              };

    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, CGImageGetWidth(image),
                        CGImageGetHeight(image), kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef)options,
                        &pxbuffer);
    if (status != kCVReturnSuccess) {
        NSLog(@"Operation failed");
    }
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    // Use the buffer's actual bytes-per-row; Core Video may pad rows beyond 4 * width.
    CGContextRef context = CGBitmapContextCreate(pxdata, CGImageGetWidth(image),
                                                 CGImageGetHeight(image), 8, CVPixelBufferGetBytesPerRow(pxbuffer),
                                                 rgbColorSpace, kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);

    // Flipping both axes is equivalent to a 180-degree rotation; drop these two
    // transforms if your output comes out oriented incorrectly.
    CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, CGImageGetHeight(image));
    CGContextConcatCTM(context, flipVertical);
    CGAffineTransform flipHorizontal = CGAffineTransformMake(-1.0, 0.0, 0.0, 1.0, CGImageGetWidth(image), 0.0);
    CGContextConcatCTM(context, flipHorizontal);

    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
                                           CGImageGetHeight(image)), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    // The caller is responsible for releasing the returned buffer with CVPixelBufferRelease().
    return pxbuffer;
}
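As for the Accelerate route mentioned above, a rough and untested sketch could look like the following. The vImage calls exist on iOS 8+ / OS X 10.10+, but treat the exact format setup as an assumption and check the headers before relying on it:

#import <Accelerate/Accelerate.h>

// Untested sketch of the Accelerate route; verify the format details against the headers.
- (CVPixelBufferRef)pixelBufferFromCGImageUsingAccelerate:(CGImageRef)image
{
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, CGImageGetWidth(image),
                                          CGImageGetHeight(image), kCVPixelFormatType_32ARGB,
                                          NULL, &pxbuffer);
    if (status != kCVReturnSuccess) return NULL;

    // Describe how the CGImage's pixels should be laid out for vImage.
    vImage_CGImageFormat format = {
        .bitsPerComponent = 8,
        .bitsPerPixel = 32,
        .colorSpace = NULL, // NULL falls back to a default RGB space
        .bitmapInfo = (CGBitmapInfo)kCGImageAlphaNoneSkipFirst,
    };

    vImage_Buffer sourceBuffer;
    if (vImageBuffer_InitWithCGImage(&sourceBuffer, &format, NULL, image, kvImageNoFlags) != kvImageNoError) {
        CVPixelBufferRelease(pxbuffer);
        return NULL;
    }

    // Copy the decoded pixels into the pixel buffer.
    vImageCVImageFormatRef cvFormat = vImageCVImageFormat_CreateWithCVPixelBuffer(pxbuffer);
    CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
    vImageCVImageFormat_SetColorSpace(cvFormat, cs); // the copy needs a color space set
    vImageBuffer_CopyToCVPixelBuffer(&sourceBuffer, &format, pxbuffer, cvFormat, NULL, kvImageNoFlags);

    CGColorSpaceRelease(cs);
    vImageCVImageFormat_Release(cvFormat);
    free(sourceBuffer.data);
    return pxbuffer;
}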
Steve Wilford
Andrea
  • why would you flip vertical and horizontal? – Maxi Mus Sep 13 '16 at 12:16
  • @MaxiMus Because UIKit and Core Graphics have different coordinate systems https://developer.apple.com/library/content/documentation/2DDrawing/Conceptual/DrawingPrintingiOS/GraphicsDrawingOverview/GraphicsDrawingOverview.html – Andrea Sep 16 '16 at 20:05
  • The y coordinates yes, but not the x... ? – Maxi Mus Sep 19 '16 at 08:52
  • Hey, I have used your code and I am getting a crash: *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Invalid parameter not satisfying: status == kCVReturnSuccess && pxbuffer != NULL'. Can you please help me with this? – Khush Mar 19 '19 at 06:40
3

You can use Core Image to create a CVPixelBuffer from a UIImage.

// 1. Create a CIImage with the underlying CGImage encapsulated by the UIImage (referred to as 'image'):

CIImage *inputImage = [CIImage imageWithCGImage:image.CGImage];

// 2. Create a CIContext:

CIContext *ciContext = [CIContext contextWithOptions:nil];

// 3. Render the CIImage to a CVPixelBuffer (referred to as 'outputBuffer', which must be created beforehand, e.g. with CVPixelBufferCreate):

[ciContext render:inputImage toCVPixelBuffer:outputBuffer];

AVFoundation provides classes that read video files (called assets) into pixel buffers, and other AVFoundation objects that handle (or have already read) assets also hand you their output as pixel buffers. If reading an existing file is your only concern, you'll find what you're looking for in the Sample Photo Editing Extension sample code.

If your source is generated from a series of UIImage objects (perhaps there was no source file, and you are creating a new file from user-generated content), then the sample code provided above will suffice.

NOTE: This is neither the most efficient nor the only means of converting a UIImage into a CVPixelBuffer, but it is by far the easiest. Using Core Graphics to convert a UIImage into a CVPixelBuffer requires a lot more code to set up attributes, such as pixel buffer size and color space, which Core Image takes care of for you.
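To tie this to the asker's goal of appending frames to a movie: below is a rough, hedged sketch of rendering into a buffer drawn from an AVAssetWriterInputPixelBufferAdaptor's pool and appending it. The names 'adaptor', 'writerInput', 'ciContext', and 'frameTime' are assumptions for illustration, not part of the answer above.

// Hedged sketch: assumes an already-configured AVAssetWriter, an
// AVAssetWriterInput ('writerInput'), and an
// AVAssetWriterInputPixelBufferAdaptor ('adaptor') wrapping that input.
CVPixelBufferRef outputBuffer = NULL;
// Pull a buffer from the adaptor's pool rather than allocating your own.
CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, adaptor.pixelBufferPool, &outputBuffer);

CIImage *inputImage = [CIImage imageWithCGImage:image.CGImage];
[ciContext render:inputImage toCVPixelBuffer:outputBuffer];

// 'frameTime' is the presentation timestamp (CMTime) for this frame.
if (writerInput.readyForMoreMediaData) {
    [adaptor appendPixelBuffer:outputBuffer withPresentationTime:frameTime];
}
CVPixelBufferRelease(outputBuffer);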

James Bush
  • I am not sure what is causing this. But CIContext creation causes an error, in a multi-threaded context. The weird thing is that this only happens when I am processing the images quickly. When I do it slowly, this doesn't seem to be an issue. – nnrales Mar 21 '18 at 23:04
  • I have no problem, but I've learned all of the ins and outs of every image-processing framework available for iOS. I'd have to see your code to nail it down for sure, but here's something to try: create a single, shared instance of CIContext to reuse for each image, and make sure you're creating the right kind of context. They are each very different, and while the compiler may let you create any one of them, not all of them work in all cases. The documentation is much improved; consult it for more. – James Bush Mar 27 '18 at 16:29
0

Google is always your friend. Searching for "CVPixelBufferRef" the first result leads to this snippet from snipplr:

- (CVPixelBufferRef)fastImageFromNSImage:(NSImage *)image{
CVPixelBufferRef buffer = NULL;

// config
size_t width = [image size].width;
size_t height = [image size].height;
size_t bitsPerComponent = 8; // *not* CGImageGetBitsPerComponent(image);
CGColorSpaceRef cs = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
CGBitmapInfo bi = kCGImageAlphaNoneSkipFirst; // *not* CGImageGetBitmapInfo(image);
NSDictionary *d = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey, [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey, nil];

// create pixel buffer
CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32ARGB, (CFDictionaryRef)d, &buffer); // kCVPixelFormatType_32ARGB replaces the old k32ARGBPixelFormat
CVPixelBufferLockBaseAddress(buffer, 0);
void *rasterData = CVPixelBufferGetBaseAddress(buffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(buffer);

// context to draw in, set to pixel buffer's address
CGContextRef ctxt = CGBitmapContextCreate(rasterData, width, height, bitsPerComponent, bytesPerRow, cs, bi);
if(ctxt == NULL){
    NSLog(@"could not create context");
    CGColorSpaceRelease(cs);
    CVPixelBufferUnlockBaseAddress(buffer, 0);
    CVPixelBufferRelease(buffer);
    return NULL;
}

// draw
NSGraphicsContext *nsctxt = [NSGraphicsContext graphicsContextWithGraphicsPort:ctxt flipped:NO];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:nsctxt];
[image compositeToPoint:NSMakePoint(0.0, 0.0) operation:NSCompositeCopy];
[NSGraphicsContext restoreGraphicsState];

CGColorSpaceRelease(cs);
CVPixelBufferUnlockBaseAddress(buffer, 0);
CGContextRelease(ctxt);

return buffer;
}

No idea if this works at all, though. (Your mileage may vary :)

IlDan
  • The question was about converting a `UIImage` to `CVPixelBufferRef`, not `NSImage` – srgtuszy Jan 15 '13 at 10:18
  • The line `size_t bytesPerRow = CVPixelBufferGetBytesPerRow(buffer);` is what did the trick for me. All the other code samples were using a magic constant that happens to be wrong on this device; being able to determine the real value was the key. Thanks. – Brian Trzupek Feb 16 '16 at 17:03
0

Very late, but for anyone who needs it:

// call like this
CVPixelBufferRef videobuffer = [self pixelBufferFromCGImage:yourImage.CGImage];

// method that converts

// Note: 'videoSize' below is assumed to be a CGSize instance variable holding the output dimensions.
- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image
{
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey, nil];
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, videoSize.width, videoSize.height,
                                          kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef)options, &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, videoSize.width, videoSize.height, 8, CVPixelBufferGetBytesPerRow(pxbuffer), rgbColorSpace, kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);
    // Note: if videoSize differs from the image's size, drawing at the image's own size
    // crops the frame; draw into CGRectMake(0, 0, videoSize.width, videoSize.height) to scale instead.
    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image)), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    return pxbuffer;
}
  • I've tried this. Using this, the image frame gets cropped. So here is the solution: `CGContextDrawImage(context, CGRectMake(0, 0, videoSize.width, videoSize.height), image);` – Talha Ahmad Khan Sep 25 '18 at 06:14
-1

A CVPixelBufferRef is what Core Video uses for camera input.

You can create similar pixel bitmaps from images using CGBitmapContextCreate and then drawing the image into the bitmap context.

hotpaw2
  • What I want to do is add single frames to a movie using AVFoundation. This will be done using the class AVAssetWriterInputPixelBufferAdaptor, but that class expects CVPixelBufferRef objects. So, how can I convert a UIImage to a CVPixelBufferRef object? – Nuker Oct 02 '10 at 15:02
  • The question asks how to convert a UIImage to CVPixelBufferRef. This answer offers no solution. – Gene Z. Ragan May 28 '20 at 19:38