
I am doing video recording. I need to snapshot a view to a UIImage and then convert it to a CVPixelBufferRef. This works fine with the RGBA color space, but the CVPixelBufferRef I need should be in a YUV color space.

Anyone have any ideas? Thanks.

+ (CVPixelBufferRef) pixelBufferFromLayer:(CALayer *)layer forSize:(CGSize)size
{
    UIImage * image = [self fetchScreenShotFromLayer:layer forSize:size];

// this worked fine
//    CVPixelBufferRef rgbBuffer = [self RGBPixelBufferFromCGImage:image.CGImage];
//    return rgbBuffer;

//    NSData * imageData = UIImageJPEGRepresentation(image, 0.5);
    NSData * imageData = UIImagePNGRepresentation(image);
    CVPixelBufferRef buffer = [self yuvPixelBufferWithData:imageData width:size.width heigth:size.height];
    return buffer;
}

Creating a CVPixelBufferRef with an RGB color space works fine:

// RGB color space pixel buffer
+ (CVPixelBufferRef) RGBPixelBufferFromCGImage:(CGImageRef)image
{
    NSDictionary * options = @{
                               (NSString *)kCVPixelBufferCGImageCompatibilityKey: @(YES),
                               (NSString *)kCVPixelBufferCGBitmapContextCompatibilityKey: @(YES),
                               };

    CVPixelBufferRef pxbuffer = NULL;

    size_t frameWidth = CGImageGetWidth(image);
    size_t frameHeight = CGImageGetHeight(image);

    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, frameWidth, frameHeight, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef)options, &pxbuffer);

    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void * pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, frameWidth, frameHeight, 8, CVPixelBufferGetBytesPerRow(pxbuffer), rgbColorSpace, (CGBitmapInfo)kCGImageAlphaNoneSkipFirst);

    NSParameterAssert(context);
    CGContextConcatCTM(context, CGAffineTransformIdentity);
    CGContextDrawImage(context, CGRectMake(0, 0, frameWidth, frameHeight), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    return pxbuffer;
}

// snapshot of a layer
+ (UIImage *) fetchScreenShotFromLayer:(CALayer *)layer forSize:(CGSize)size
{
    UIImage * image = nil;

    @autoreleasepool {
        NSLock * aLock = [NSLock new];
        [aLock lock];

        CGSize imageSize = size;
        UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
        CGContextRef context = UIGraphicsGetCurrentContext();
        [layer renderInContext:context];
        image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        [aLock unlock];
    }

    return image;
}

Something is wrong with this:

// data to yuv buffer
+ (CVPixelBufferRef)yuvPixelBufferWithData:(NSData *)dataFrame
                                     width:(size_t)w
                                    height:(size_t)h
{
    unsigned char* buffer = (unsigned char*) dataFrame.bytes;
    CVPixelBufferRef getCroppedPixelBuffer = [self copyDataFromBuffer:buffer toYUVPixelBufferWithWidth:w Height:h];
    return getCroppedPixelBuffer;
}

+ (CVPixelBufferRef) copyDataFromBuffer:(const unsigned char*)buffer toYUVPixelBufferWithWidth:(size_t)w Height:(size_t)h
{
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];

    CVPixelBufferRef pixelBuffer;
    CVPixelBufferCreate(NULL, w, h, kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, (__bridge CFDictionaryRef)(options), &pixelBuffer);

    size_t count = CVPixelBufferGetPlaneCount(pixelBuffer);
    NSLog(@"PlaneCount = %zu", count);  // 2

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    size_t d = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
    const unsigned char* src = buffer;
    unsigned char* dst = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);

    for (unsigned int rIdx = 0; rIdx < h; ++rIdx, dst += d, src += w) {
        memcpy(dst, src, w);
    }

    d = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
    dst = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
    h = h >> 1;
    for (unsigned int rIdx = 0; rIdx < h; ++rIdx, dst += d, src += w) {
        memcpy(dst, src, w);
    }

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    return pixelBuffer;
}

Here are the images:

[image: original image]

[image: RGB buffer]

[image: YUV buffer]

Thanks for your help.

guojing

1 Answer


You can get the raw data from the UIImage, which is in RGBA format, then convert it to the YUV format and use the YUV data to fill the different planes of the CVPixelBufferRef that you get from the CMSampleBufferRef. The CMSampleBufferRef here is the parameter of the captureOutput delegate callback. Just remember to set the video settings to kCVPixelFormatType_420YpCbCr8BiPlanarFullRange (if this is what you want) when initializing the AVCaptureVideoDataOutput of the AVCaptureSession.
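
A minimal sketch of that RGBA-to-YUV conversion, assuming tightly packed RGBA bytes (4 bytes per pixel, no row padding) and the standard BT.601 full-range coefficients; the helper name yuvPixelBufferFromRGBA:width:height: is made up for this example. For brevity, each 2x2 block's chroma is sampled from its top-left pixel; averaging the block gives slightly better quality.

#import <CoreVideo/CoreVideo.h>

// Hypothetical helper: fill a bi-planar 4:2:0 full-range (NV12-style) pixel
// buffer from tightly packed RGBA bytes.
+ (CVPixelBufferRef)yuvPixelBufferFromRGBA:(const uint8_t *)rgba
                                     width:(size_t)w
                                    height:(size_t)h
{
    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, w, h,
                                          kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
                                          NULL, &pixelBuffer);
    if (status != kCVReturnSuccess || pixelBuffer == NULL) return NULL;

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    uint8_t *yPlane   = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
    uint8_t *uvPlane  = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
    size_t   yStride  = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
    size_t   uvStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);

    for (size_t row = 0; row < h; row++) {
        for (size_t col = 0; col < w; col++) {
            const uint8_t *p = rgba + (row * w + col) * 4;
            float r = p[0], g = p[1], b = p[2];

            // Luma for every pixel (BT.601, full range).
            yPlane[row * yStride + col] =
                (uint8_t)(0.299f * r + 0.587f * g + 0.114f * b);

            // One interleaved Cb/Cr pair per 2x2 block; the pair for an even
            // column col sits at byte offset (col / 2) * 2 == col in plane 1.
            if ((row & 1) == 0 && (col & 1) == 0) {
                uint8_t *uv = uvPlane + (row / 2) * uvStride + col;
                uv[0] = (uint8_t)(-0.169f * r - 0.331f * g + 0.500f * b + 128.0f); // Cb
                uv[1] = (uint8_t)( 0.500f * r - 0.419f * g - 0.081f * b + 128.0f); // Cr
            }
        }
    }

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    return pixelBuffer;
}

On the capture side, the setting the answer mentions goes into the output's videoSettings dictionary, i.e. output.videoSettings = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) };.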

Xiaoqi
  • Could you point us in the right direction on how to access the raw data of UIImage and how to translate this data into the YUV format? – Jovan May 29 '20 at 13:20
  • Get raw data of a UIImage: https://stackoverflow.com/questions/448125/how-to-get-pixel-data-from-a-uiimage-cocoa-touch-or-cgimage-core-graphics And there are a lot of answers on transforming from YUV to RGB or the reverse; here's one: https://stackoverflow.com/questions/17892346/how-to-convert-rgb-yuv-rgb-both-ways – Xiaoqi Jun 24 '20 at 03:05
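
For reference, a minimal sketch of the first step that comment links to: drawing the UIImage into a CGBitmapContext backed by your own buffer, so the pixel layout is known. The helper name rgbaBytesFromImage: is made up for this example, and the caller owns (and must free()) the returned buffer.

// Render the image into a bitmap context backed by our own buffer: the result
// is 4 bytes per pixel, RGBA order, premultiplied alpha (Core Graphics
// requires premultiplied alpha for this configuration).
+ (uint8_t *)rgbaBytesFromImage:(UIImage *)image
{
    CGImageRef cgImage = image.CGImage;
    size_t w = CGImageGetWidth(cgImage);
    size_t h = CGImageGetHeight(cgImage);

    uint8_t *rgba = calloc(w * h * 4, sizeof(uint8_t));
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(rgba, w, h, 8, w * 4, colorSpace,
        (CGBitmapInfo)kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    if (context == NULL) { free(rgba); return NULL; }

    CGContextDrawImage(context, CGRectMake(0, 0, w, h), cgImage);
    CGContextRelease(context);
    return rgba;
}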