
I was using CGImageCreate with CGColorSpaceCreateDeviceGray to convert a buffer (CVPixelBufferRef) to a grayscale image. It was very fast and worked well until iOS 12; now the returned image is empty.

The code looks like this:

bitmapInfo = kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst;

CGDataProviderRef provider = CGDataProviderCreateWithData((void *)i_PixelBuffer,
                                                          sourceBaseAddr,
                                                          sourceRowBytes * height,
                                                          ReleaseCVPixelBuffer);
retImage = CGImageCreate(width,
                         height,
                         8,
                         32,
                         sourceRowBytes,
                         CGColorSpaceCreateDeviceGray(),
                         bitmapInfo,
                         provider,
                         NULL,
                         true,
                         kCGRenderingIntentDefault);
CGDataProviderRelease(provider);

Is this a known bug in iOS 12? If device gray is no longer supported in this function, can you suggest another way to do it?

Note that conversion should take less than 0.1 seconds for a 4K image.

Thanks in advance!

David Gagnon
  • I suppose you could treat the buffer as a CIImage and desaturate or similar. – matt Oct 23 '18 at 01:14
  • By the way, I wonder whether this is related in some way to the new iOS 12 feature where a graphics context self-configures its depth depending on what you draw into it. That new feature has caused me quite a bit of trouble too. – matt Oct 23 '18 at 01:20

2 Answers


According to the list of Supported Pixel Formats in the Quartz 2D Programming Guide, iOS doesn't support 32 bits per pixel with gray color spaces. And even on macOS, 32 bpp gray requires the use of kCGBitmapFloatComponents (and float data).

Is your data really 32 bpp? If so, is it float? What are you using for bitmapInfo?

I would not expect CGImageCreate() to "convert" a buffer, including to grayscale. The parameters you're supplying are telling it how to interpret the data. If you're not using floating-point components, I suspect it was just taking one of the color channels and interpreting that as the gray level and ignoring the other components. So, it wasn't a proper grayscale conversion.
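To illustrate the difference between "interpreting one channel as gray" and a real grayscale conversion, here is a small plain-C sketch. It is not the poster's code and the helper names are hypothetical; it assumes a packed little-endian BGRA buffer, and uses Rec. 601 luminance weights for the proper conversion:

```c
#include <stdint.h>
#include <stddef.h>

/* What a misconfigured gray CGImage effectively does: pick one byte
   per pixel and call it the gray level. */
static uint8_t gray_single_channel(const uint8_t *bgra) {
    return bgra[2]; /* just the red byte -- not true grayscale */
}

/* A proper conversion weights all three channels.
   Rec. 601: Y = 0.299 R + 0.587 G + 0.114 B (integer approximation). */
static uint8_t gray_luminance(const uint8_t *bgra) {
    uint32_t b = bgra[0], g = bgra[1], r = bgra[2];
    return (uint8_t)((299 * r + 587 * g + 114 * b) / 1000);
}

/* Convert a packed BGRA buffer to a tightly or loosely packed
   8-bit gray buffer, honoring each buffer's row stride. */
void bgra_to_gray(const uint8_t *src, size_t srcRowBytes,
                  uint8_t *dst, size_t dstRowBytes,
                  size_t width, size_t height) {
    for (size_t y = 0; y < height; y++) {
        const uint8_t *row = src + y * srcRowBytes;
        uint8_t *out = dst + y * dstRowBytes;
        for (size_t x = 0; x < width; x++)
            out[x] = gray_luminance(row + 4 * x);
    }
}
```

For a pure green pixel, `gray_single_channel` returns 0 while `gray_luminance` returns 149, which is why "it produced *an* image" before iOS 12 doesn't mean it produced the *right* image.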

Apple's advice is to create an image that accurately describes the data as it actually is; create a bitmap context with the colorspace, pixel layout, and bitmap info you desire; draw the former into the latter; and create the final image from the context.
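That pipeline might be sketched like this (Apple platforms only; hypothetical function name, and it assumes `srcImage` is a CGImage already created with its true colorspace and pixel layout):

```c
#include <CoreGraphics/CoreGraphics.h>

// Sketch of the redraw approach: draw the correctly-described source
// image into a grayscale bitmap context, then take the context's image.
CGImageRef CreateGrayscaleCopy(CGImageRef srcImage) {
    size_t width  = CGImageGetWidth(srcImage);
    size_t height = CGImageGetHeight(srcImage);

    CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
    // 8 bits per component, one component, no alpha: a gray layout iOS supports.
    // Passing NULL/0 lets Quartz allocate the buffer and choose the stride.
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height,
                                             8, 0, gray, kCGImageAlphaNone);
    CGColorSpaceRelease(gray);
    if (!ctx) return NULL;

    // Quartz performs the actual color conversion during this draw.
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), srcImage);
    CGImageRef result = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    return result;
}
```

This does cost a full draw pass, so whether it meets the 0.1 s budget for a 4K frame would need measuring.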

Ken Thomases
  • You know much more about this than I do but I can't help feeling we're not getting at why this supposedly worked in iOS 11. – matt Oct 23 '18 at 03:40
  • @matt My guess is in my third paragraph, but we can't be sure without a) more info and b) some testing of source and resulting images, etc. It's an interesting mystery, but… \*shrug\* – Ken Thomases Oct 23 '18 at 03:56
  • Thanks for responding to my question! My data is probably OK, because it has been working for a long time; currently the data is kCVPixelFormatType_32BGRA. And yes, it is possible that one of the color channels was used for the black-and-white conversion. – David Gagnon Oct 27 '18 at 19:59

I finally found a workaround for my purpose. Note that the CVPixelBuffer is coming from the video camera.

  • Changed camera output pixel format to
    kCVPixelFormatType_420YpCbCr8BiPlanarFullRange (AVCaptureVideoDataOutput)
  • Extract the Y plane from YpCbCr
  • Build a CGImage with the Y plane

Code:

// some code

// The Y (luma) plane of the biplanar buffer is already 8-bit grayscale,
// so it can be wrapped in a gray bitmap context directly.
colorSpace = CGColorSpaceCreateDeviceGray();
sourceRowBytes = CVPixelBufferGetBytesPerRowOfPlane(i_PixelBuffer, 0); // plane 0 is Y
sourceBaseAddr = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(i_PixelBuffer, 0);
bitmapInfo = kCGImageByteOrderDefault; // 8-bit gray, no alpha

// some code

CGContextRef context = CGBitmapContextCreate(sourceBaseAddr,
                                             width,
                                             height,
                                             8,              // bits per component
                                             sourceRowBytes,
                                             colorSpace,
                                             bitmapInfo);
retImage = CGBitmapContextCreateImage(context);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);

// some code

You can also look at this related post: 420YpCbCr8BiPlanarVideoRange To YUV420 ?/How to copy Y and Cbcr plane to Single plane?

David Gagnon